* Described at https://news.ycombinator.com/item?id=10705926 and the other links there.
Of course, now I know what a good job moderators have been doing, and I have a better understanding of the toll that role can take. They deserve a lot of respect. The idea that this is a subject worthy of systematic study sticks with me, though I'm not yet clear about the next steps. In any case, the implications of the story cover a large space in economics, politics, ethics, law, and medicine. It will be fascinating to see how it develops.
Effective automated approaches will provide a big advantage in narrowing down the "judgement space" that must be decided by humans. To the extent that's possible, the key benefit is reducing moderators' exposure to the stressful situations the article describes, and that would be a very helpful development.
As the article points out, the whole domain of moderation practices is a minefield. But now I wonder if there isn't also a risk of automated review making classification errors about user behavior. Plausibly, automated systems can be tuned finely enough to avoid serious errors and can support human oversight to catch and more easily resolve edge cases. Automated moderation systems will need such qualities if they are to reduce the human burden as intended.
'Members of the team quickly showed signs of stress — anxiety, drinking, trouble sleeping — and eventually managers brought in a therapist. As moderators described the images they saw each day, the therapist fell silent. The therapist, Mora-Blanco says, was "quite literally scared."'
'Beth Medina, who runs a program called SHIFT (Supporting Heroes in Mental Health Foundational Training), which has provided resilience training to Internet Crimes Against Children teams since 2009, details the severe health costs of sustained exposure to toxic images: isolation, relational difficulties, burnout, depression, substance abuse, and anxiety. "There are inherent difficulties doing this kind of work," Chen said, "because the material is so traumatic."'
butters was in charge of filtering cartman's comments so cartman would only see material that would be unable to offend him
cartman seemed to flourish with his new perceived support and popularity and butters deteriorated
i thought it compelling to look at how they chose to do butters
'this material is offensive! you should deal with it instead of me'
cartman was characteristically unconcerned about butters' well-being as long as the supportive comments kept coming in
i understand the satire loses some weight when you acknowledge that those that hire these moderators are mostly companies with a vested interest in keeping their users blissfully unaware
10,000 individual users being inadvertently exposed to 1 piece of upsetting material could potentially mean 1 moderator then has to review 10,000 pieces of upsetting material
also, the deluge
mailroom syndrome, it just keeps coming
honestly, computer vision is the only way to do this sort of moderation with any semblance of success and unfortunately i feel all of this human debris is just considered collateral damage until we get it functioning
I think they'll go down as two of the greatest commentators of our generation.
i failed to because i refuse to link to that hulu run abomination
bah, south park, the show where the creators themselves put their work on the internet to be viewed freely and for free
what a unique expression of the creators' personal beliefs
a stance that had a profound impact on me, and my peers
when hulu bought the rights i felt my skin grow cold, but i held out hope that someone at hulu was just a fan who wanted to improve south park studios' backend whilst respecting the creators' implicit wish that their work be free and readily consumable by anyone, without manufactured scarcity or paywalls
now this shit is the new normal:
This episode is currently not available at South Park Studios
There was an error playing this protected content. (Error code: 3365)
hulu has zero respect for the series or its creators' opinions on how open it has always been, and should be
fuck you hulu
it would be unsurprising that a subset of moderators develop PTSD-like symptoms related to that experience.
Is that really the case? Most moderation is done on text-based forums, like the one we're on right now. I'm having a hard time imagining the right combination of words on a screen generating trauma in the quantities required for someone to wind up with a clinical disorder because of it.
Personally, I've been on both sides of this coin. I've been the guy kicking trolls off of a decent sized board, I've been the guy getting kicked off because I annoyed the wrong person.
At no point did it ever progress beyond internet drama. The meanest, nastiest person I could imagine could type words about how they'd like to kill and fornicate with my mother, and my response, and most other people's I wager, would be "That's cool, kid. Bye now. Ban"
Maybe there's a case to be made for becoming jaded after a while of dealing with the worst (hang out on the meta Stack Exchange sites for a publicly visible example) - but PTSD symptoms? To me, that both overly glorifies the troll "i can type words so good that I can make other people have legitimate mental breakdowns!" and makes light of the suffering by people who actually have PTSD (who have seen things like people dying). It feels like a lack of perspective, brought on by spending a lot of time online. Unlike real life, disconnecting from online drama is always a button press away.
The article clearly describes moderators watching videos flagged as offensive, and these videos included "amateur and professional pornography" and many that contained "child abuse, beatings, and animal cruelty". Furthermore, the article discusses moderators having to deal with videos shot during the Iranian revolution in 2009, including the murder of a young woman, "a shaky cell-phone video captured her horrific last moments: in it, blood pours from her eyes, pooling beneath her."
The article presents many more examples of the enormously disturbing tasks assigned to moderators. Based on decades in clinical practice, I'd consider it likely that viewing such material, especially viewing it frequently, can precipitate acute and chronic stress disorders in vulnerable individuals. I understand that this is only hypothetical in the absence of systematically collected data, call it a clinician's hunch if you want, but I'd still put money on it.
Rereading the article confirms for me that it is an excellent piece of journalism, perhaps even exceptional in the current era. Moderators were exposed to far more than words, in a way similar to 911 operators suffering exposure to trauma without being at the scene. Forum moderation has very little in common with the subject of the article.
You are severely underestimating the effects of this kind of experience on people in such positions. You should learn more about stress disorders, whose complexity is bound up with the unique and highly variable attributes of individuals. Some people are much more resilient than others, which makes it misleading to generalize about how people will respond to given situations.
Most of all, I strongly suggest you (re)read the story and, if you can, allow yourself to empathize with the plight of the moderators.
Uh, why? Am I a counselor?
> allow yourself to empathize with the plight of the moderators
To what end? What possible constructive purpose is served by allowing myself to share (or wallow) in the negative emotions described by others? Does it stop "bad things" from happening? Does it lessen their pain? Does it lessen anyone's pain? Or does it increase the overall amount of suffering in the world?
brb, going to voluntarily take a job that makes me feel bad, so I can cry about it on the internet...
Even on a text-based forum like reddit there are two subreddits I usually visit (/r/syriancivilwar and /r/combatfootage) where one can see links to videos of dead people or worse (very nasty things happen during wars). I could very well imagine someone getting PTSD-like symptoms after having moderated videos, or links to videos, of children who have just died in chemical-weapons attacks. That is what nightmares are made of.
For starters, this is not true. Stuff that happens online has real world consequences. People get hired, fired, meet people they marry, get doxed and have their personal contact info put out there and on and on. The Internet is not disconnected from real life to the degree that your comment makes it sound.
I was a military wife for two decades. I lived in the High Desert where I dealt routinely with coyotes and large spiders and poisonous snakes, etc. I have spent time on health forums where deaths were so routinely reported that they had their own practices for how to deal with death, their own culture surrounding that so to speak. These experiences have left me somewhat thick skinned.
I read accident claims for over five years at an insurance company. Because of the aforementioned background, it was rare for me to be disturbed by the detailed medical and police reports that I read for up to 8 hours a day, five days a week. Many of my coworkers found the work far more stressful than I did.
But I would absolutely not want to have to moderate the photo and video stuff described in this article. And I would absolutely feel personally threatened by people making ugly threats of the sort that you dismiss with "That's cool, kid. Bye now. *Ban".
I'm pretty thick skinned, but I'm also a woman. Most of the people who are moderators are women. A lot of this nastiness is absolutely aimed at women. I think it hits a lot closer to home for most women than it does for most men. Women tend to routinely deal with genuine threats to their welfare, such as stalking, harassment and rape, that men tend to be largely exempt from.
When you recognize that your work impacts real people and how safe they are, that also can be a burden. When I read accident claims, in most cases, no lives hung in the balance. That is part of the reason it did not bother me to read them: I was not deciding whether or not to report anything to the police or otherwise take an action that impacted someone's personal safety. I was only deciding whether or not to cut someone a check and whether or not to refer the claim to the fraud department.
In a few cases, I was trying to get it paid rapidly because the customer was broke and facing having their power cut off and nothing to eat in the house. But, really, what I did mostly did not have a sense of "And if I do the wrong thing, someone could wind up killed, raped, fired from their job etc." What moderators do can have such impacts. That kind of responsibility can wear on a person. Perhaps not you, but plenty of people would find it wearing.
The Internet is something of a Pandora's box, it seems.
I once started watching a movie with Monica Bellucci. Some way into the film, she is followed into a tunnel and brutally raped. I couldn't make it through the scene without getting sick to my stomach and feeling the need to throw up. It even made me want to cry. It was far, far too intense. I had to turn it off.
I can't help but feel awful there are people whose jobs are screening such things recorded from real-life events, who don't get to just turn it off. There's always another clip in the queue.
While the tone could be construed as lacking in empathy, and the article clearly states that moderators are exposed to videos, the question can definitely be rephrased as a valid one.
When reading the accounts of the moderators, I of course immediately felt sorry for them - it must be pretty harrowing to watch a constant barrage of child porn, murders and gore (think being on /b/). I imagine it would leave you feeling pretty jaded and losing faith in humanity.
But my second thought went to all my friends who are doctors and see terrible injuries and ailments on a daily basis, or criminal lawyers who review cases which revolve around the gutter of human behaviour. What is the PTSD incidence rate amongst these two professions?
What sparks PTSD? Is it the context in which the triggering events are presented? Are lawyers and doctors more resilient because there is an expectation of being exposed to these kinds of things, making the professions self-selecting? Is it the relentless barrage of content that affected the moderators? Or was it the moderators' lack of empowerment, compared with doctors and lawyers, to make an impact on these things?
Going forward what can we do to better prepare moderators for this?
It's really nice to see light shining on this topic. Truly, this content moderation is the internet-medium equivalent of the boundaries of debate enforced by the editors and owners of major media corporations in previous mediums.
At some level, users can choose from a variety of forums with different moderation methods, or none, and companies can try to fit their moderation policies and practices to their users' expectations. All of that develops in a cross between evolution and arms race, with lots of moving targets: global variation, changing social mores, changing technology, the latest taboos and moral panics.

Exposing the details of how the sausage is made would help the people who want to put pebbles (or worse) in the sausage more than it would help the makers or consumers. If one finds the sausage too bland, there is likely a spicier one just a few clicks away. Or one could grind one's own sausage.
That assumes the rules have really hard edges, which I think is very unlikely. The substantial reason companies won't publish their moderation rules is almost certainly that they don't have much more than a few bullet points, plus a page of "things the lawyers have said you mustn't let people say", and otherwise leave it up to the whim of the moderators.
Having said that, it might well be illegal - or in contempt of court - to publish the list of things that are moderated for legal reasons.
Social media companies operate in a market economy.
These are not public utilities or government programs.
Feel free to propose a more cost-effective approach.
These issues come up for companies whose business models are deciding what people should see and hear. This makes them incredibly powerful, but it brings them into the bog.
This isn't a new problem, apart from internet scale. Media companies, in having to decide what news to print, have long faced content moderation.
Finally, content moderation is inherently political. ModSquad - one of many booming companies in the social media moderation space - serves the US State Department, which makes no bones about the fact that it uses all elements of national power to affect what populations think and what access to information they have (the State Department also runs the Bureau of International Information Programs, its half of the US propaganda programme).
Like newspapers, TV shows, cable companies, and radio programs before it - this new media sees itself filtering content for the public - with all of the complexities, power, and grief that entails.
Lots of early Internet folks started out pro-free-speech and tried to be as hands-off as they could. But even if you're a stubborn libertarian, eventually you find out that it doesn't work.
Sooner or later you're going to need a moderation team (or do it all yourself), and there are going to be tough political calls. The bigger the site, the more crap you see and the more tough calls there are.
> The screener was instructed to take down videos depicting drug-related violence in Mexico, while those of political violence in Syria and Russia were to remain live. This distinction angered him. Regardless of the country, people were being murdered in what were, in effect, all civil wars. "[B]asically," he said, "our policies are meant to protect certain groups."
Among other things it puts Putin's decision to grab VKontakte from its founder in a new light. Presumably videos of Mexican violence (supported by American guns and by the Americans' desire for drugs) are not banned on VK, while Russian violence I suppose is now easily banned. It also makes him look less like a paranoid for accusing the US special services of having "created the Internet" (http://www.theguardian.com/world/2014/apr/24/vladimir-putin-...)
In May 2014, Dave Willner and Dan Kelmenson, a software engineer at Facebook, patented a 3D-modeling technology for content moderation ... First, the model identifies a set of malicious groups - say neo-Nazis, child pornographers, or rape promoters. The model then identifies users who associate with those groups through their online interactions. Next, the model searches for other groups associated with those users and analyzes those groups "based on occurrences of keywords associated with the type of malicious activity and manual verification by experts." This way, companies can identify additional or emerging malicious online activity
It's not hard to imagine how this could be misused: Automated guilt-by-association, and within 3 degrees of separation, triggering investigation by authorities (corporate or otherwise).
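To make the over-reach concrete: the expansion the patent summary describes is essentially a bounded traversal of a group-membership graph. A minimal sketch of that step (group and user names here are entirely hypothetical, and this is my reading of the summary, not the patented algorithm) shows how quickly unrelated communities get swept in:

```python
# Guilt-by-association expansion: start from flagged groups, walk
# group -> user -> group edges up to max_hops times, collect every
# group reached. Illustrative only; not the actual patented method.
from collections import deque

def expand_from_groups(seed_groups, members, memberships, max_hops=3):
    """members: group -> set of users; memberships: user -> set of groups.
    Returns all groups reachable within max_hops expansion steps."""
    seen = set(seed_groups)
    frontier = deque((g, 0) for g in seed_groups)
    while frontier:
        group, hops = frontier.popleft()
        if hops >= max_hops:
            continue
        for user in members.get(group, ()):
            for g in memberships.get(user, ()):
                if g not in seen:
                    seen.add(g)
                    frontier.append((g, hops + 1))
    return seen

# Hypothetical data: one flagged group, two innocuous ones.
members = {"flagged": {"alice"},
           "hobby": {"alice", "bob"},
           "book_club": {"bob"}}
memberships = {"alice": {"flagged", "hobby"},
               "bob": {"hobby", "book_club"}}

# Within two hops, both unrelated groups (and their members) are
# pulled into the "associated" set -- exactly the concern above.
reached = expand_from_groups({"flagged"}, members, memberships)
```

With `max_hops=3`, everyone who shares any group with anyone who shares a group with a flagged user ends up in scope, which is why keyword analysis plus "manual verification by experts" does the real filtering in the patent's description.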
EDIT: And also this tool of automated censorship, though I doubt it will surprise many HN readers
PhotoDNA works by processing an image every two milliseconds and is highly accurate. ... Then PhotoDNA extracts from each image a numeric signature that is unique to that image, "like your human DNA is to you." Whenever an image is uploaded, whether to Facebook or Tumblr or Twitter, and so on, he says, "its photoDNA is extracted and compared to our known ... images. Matches are automatically detected by a computer and reported ... for a follow-up investigation"
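PhotoDNA's actual signature algorithm is proprietary, but the matching scheme the quote describes (extract a compact signature, compare it against a database of known images with some tolerance) can be illustrated with a generic average-hash stand-in:

```python
# Illustrative perceptual-hash matching. This average-hash is a
# simple stand-in, NOT PhotoDNA: downscale to a small grid, threshold
# each pixel against the mean, pack the bits, compare by Hamming
# distance so near-duplicates (re-encodes, slight edits) still match.

def average_hash(pixels):
    """pixels: 2D list of grayscale values (already downscaled)."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return sum(1 << i for i, p in enumerate(flat) if p >= mean)

def hamming(a, b):
    """Number of differing bits between two signatures."""
    return bin(a ^ b).count("1")

def is_known(signature, known_signatures, max_distance=2):
    """Match against a database of known signatures, tolerating
    small perturbations via a Hamming-distance threshold."""
    return any(hamming(signature, k) <= max_distance for k in known_signatures)

# A tiny 4x4 "image" and a slightly perturbed copy hash to the same
# (or nearly the same) signature, so the copy is still flagged.
original = [[10, 200, 30, 220], [15, 210, 25, 215],
            [12, 205, 35, 225], [18, 198, 28, 230]]
perturbed = [[11, 199, 31, 221], [16, 209, 24, 216],
             [13, 204, 36, 224], [19, 197, 29, 229]]

known = {average_hash(original)}
assert is_known(average_hash(perturbed), known)
```

The distance threshold rather than exact equality is the key design point: it is what lets the system survive re-encoding and resizing, and also what creates the false-positive surface a human has to review.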
Currently used for child exploitation images, but for what else too and by whom? They add that it's being applied to something more political:
Farid is now working with tech companies and nonprofit groups to develop similar technology that will identify extremism and terrorist threats online - whether expressed in speech, image, video, or audio.
And there's already been talk of people inside Facebook asking whether they're 'doing enough' to stop a Donald Trump presidency:
As well as cases where people were visited by police after making right wing comments online:
(Yes, I know it's from Breitbart, but the point still stands)
And there's a lot of talk about Twitter censorship, blockbots blocking people based on what accounts they follow rather than any actions and all that stuff.
Similar systems are already being abused by social networks, and this type of technology will only make cases like this more common.
> Yes, I know it's from Breitbart, but the point still stands
Not if you don't believe Breitbart
They are probably threatening to arrest people / ban Facebook if there are any maps of Kurdistan posted.
The reason I find it more insidious is that it affects the speech of and acts to silence even moderate people. Extreme speech is easy to spot; but when moderate speech encounters headwinds that divert its path, that's creepy.
I understand the goals here, don't need them explained, just pointing out that slashdot in its heyday was a terrific and informative resource and it did not resort to stifling.
The article starts out with the story of Youtube, and you read about the lists the moderators were working off of to keep certain materials off Youtube. Something was missing from those lists. Any guess?
Yes, copyrighted material. I knew a guy who used to work at a competitor of Youtube. That startup was started by a Hollywood veteran (naturally), meaning he was very conscious of respecting copyrights. Youtube didn't care about copyrights, which started the upward spiral of more viewers, more uploads, and more viewers.
The end story is that Youtube took off and was bought for over a billion dollars, making the founders rich.
Mixed feelings about this. Break rules to beat others in the game, and you end up winning. And most will agree this is not cool. And yet, because of rule breakers (some, not all), our world is advancing.
A funny comment I read in a comment section:
"When someone else cheats, it's adultery. When I cheat, it's romance"
After this story, I've come up with the following:
"When someone else breaks rules, it's cheating. When I break rules, it's innovation"
2. No chance for moderation
And then there's the other part in the world of moderation, in which certain stories are not even given a chance for moderators to review.
NYT (New York Times) has abundant and quality comments on news articles. However, if you hang out there long enough, you will notice a trend.
Certain stories that don't serve NYT's goals don't get a comments section at all. And if a story's comment section fills up with comments that seem to hurt the agenda of that particular article, the section closes rather quickly.
And then there's Fox News. They don't even allow any way to post comments.
This kind of social engineering has been going on since humans became social animals, but with technology in the mix, those in power get more powerful.
It's a pretty good way to become large rather quickly, since illegal or legally questionable content brings in a lot of visitors. Then you just remove it, and watch the network effect/inflated stats lure people to your site.
"IIRC pg wrote an essay defending that concept, more-or-less (and where by "I", he meant "startups"). "
I also heard it mentioned as 'asking for forgiveness rather than permission'. Or more cynically, get big enough that you can afford to fight the inevitable lawsuits.
IIRC pg wrote an essay defending that concept, more-or-less (and where by "I", he meant "startups").
* The dark side of Guardian comments
* Why has the Guardian declared war on internet freedom?
* The New Man of 4chan
But these people have to. I wouldn't wish that on anyone.
(I'm trying hard to avoid making plays on "pediwikia" without success. And not faulting Wikipedia but rather those who'll post that kind of content, or other rot, if they can.)
If the Internet or the way that people commonly use it is going to continue to get more centralized, I hope platforms improve their ability to distinguish between the problem of "things people don't want to see" and "things people don't want other people to see", or, to put it another way, between "please don't show me things like this" and "please don't allow people to publish things like this". While users themselves might not consistently draw the distinction between the two, platforms, in principle, could.
The article touched in this issue in its discussion of the extreme variation in cultural norms, and the likelihood that one culture's vulgarity is another culture's lyr—er, that one culture's distaste for something will lead to considerable pressure for platforms to squelch it for everybody.
A common idea is that platforms have a right and even a responsibility to define their own community standards and then people can choose platforms that they prefer or that best suit them, much as people can choose the newspaper whose editorial policy and biases they find most agreeable. I think this notion has a lot to recommend it but it seems less comfortable as the patterns of people's usage of the Internet becomes ever more centralized; it also raises the question of what things can be considered infrastructural enough that people can reasonably expect (or at least accept) complete content-neutrality from them.
The article also provided an interesting reminder that most people are unlikely to be comfortable using communications systems that have no moderation or filtering at all. Among other things, that's an interesting challenge for decentralized and censorship-resistant systems; to become more popular and practical, they'll need to be paired with some ways that people can avoid unwanted communication, beyond spam.
edit: also, the "trillion or so dollars of value" that an "expert" ascribed to Section 230 is simply a euphemism for ads. Nothing else.
edit2: Wow. I just realized that the reason I've stopped getting racist hits on search engines in recent years is because they've been removed. Part of the range of discourse disappeared and I didn't even realize it. That's also got to be the reason that comment sections in papers like the Boston Globe, Chicago Tribune, and Washington Post have become racist cesspools. Forums where they could have their mutual appreciation societies away from normal people have been completely hidden from view.
Am I the only person who didn't think the internet was broken before all of this arbitration?
I feel like this is a pretty big jump to make. There's a difference between deciding not to publish and amplify child sexual assault, and the "boundaries" of free speech.
Whether their speech is free or not is irrelevant; YouTube has no obligation to publish it.
What I really want is to wade through the extra verbal fluff of a comment and get to its real points, so I can determine whether they are rational and logical. My primary method is looking for fallacies, which are tell-tale signs of a bad argument.
That still doesn't "fix" moderation, but I think such a system could be used to get rid of consistently illogical trolls and at least reduce the moderator workload.
Ideally, I imagine exclusivity of posting as barrier 1 and moderation as barrier 2. Some strange combination of HN rules and /. randomized moderation, along with /. tagging styles, would probably be best.
Another factor to consider is that sockpuppetry has thrown the democratic balance of such systems off, and I'm not yet sure how to deal with that.
Plus someone has to wade through the sea of false positives.
So, while exploitation, criminal activity, etc. can still be problematic, free speech as it pertains to politics is less of an issue, because you can upload the content to a group whose policies allow such content. For example, you might find a group that welcomes videos or pictures showing police brutality, gang brutality, or the like (so long as the content isn't an excuse to portray something else).
All this is to say that Flickr allows/allowed a more grass-roots approach to moderation, beyond the meta-moderation where the usual rules apply, i.e. to criminal acts, exploitation, etc.
In such a world, service providers would choose a level of filtering that fit their business needs, but would also let the free market decide what minimum standards were acceptable. YouTube's moderation expenses would simply focus on keeping them at level X (with a clear separation from some other site). Consumers, producers, and law enforcement would simply tune their dial to the level of content they found acceptable.
It seems like it would promote competition while effectively pricing free speech.
seemingly least of all the question of under what criteria censorship becomes moderation?
i understand your sentiment, it is concise and well developed
but saying 'censorship' is a pejorative of 'moderation', or worse 'curation', seems dangerously disingenuous
censorship is a thing
> the practice of officially examining books, movies, etc., and suppressing unacceptable parts.
that definition lacks any sort of caveat of disapproval or association with moderation or curation
i would suggest you can make the same point in more honest language by saying: moderation is censorship you agree with, or approve of, or tolerate, or enable
the 'material difference' that i was trying to discuss was less about the first part of dang's comment and more about the second
> "Censorship" is a pejorative and "curation" is an honorific and "moderation" is either that or neutral.
calling censorship a pejorative, expressing contempt or disapproval, and calling moderation 'neutral', to me, lends itself to the interpretation that the super set is moderation and censorship is a form of moderation you disapprove of
'censorship=-moderation' juxtaposed with 'moderation=+censorship'
it's funny to have received a response from dang, the hn moderator, because i have this.. self censoring :p.. feeling that any opinion i express in response will somehow be associated with dang's work on this site
i want to note, that though i am sure plenty of work goes on behind the scenes that i am unaware of, the times i have seen dang step into a thread and moderate explicitly it has been done with commendable tact and respect both for the community and the issue or user being addressed
> "Censorship" is a pejorative
is a terrifying sentence to me
This is kind of a long-winded thread about two sentences from 'dang ... not sure how much value is left in continuing it further.
in all fairness this is a 'long winded' thread about two sentences from my gp.. hence my continued interest in discussion
> whether someone chooses to use 'censorship', 'moderation', or 'curation' depends on how they view the subject at hand.
with this, i agree
> the difference may have more to do with whether you like a particular instance of it or not.
with this, i disagree
i read that as 'censorship is a pejorative', but you are suggesting i read it as 'censorship is usually a pejorative'
i agree that one can call an act of censorship by name to draw attention to their contempt for it, but if i note something is censorship, and someone responds by saying, 'that is pejorative', i am going to question that person's bias
Of course, I'm admittedly accepting there is a difference between censoring (the action that results in censorship or moderation), moderation (the subjective act that positively serves an agenda), and censorship (the subjective act that negatively serves an agenda). In so doing, I admit that my own use of the terms is subjectively informed by my reception of the content (and think it illustrates what dang was after).
That's how I always viewed it anyway. And that's from someone who's moderated quite a lot of forums and other internet communities.
Heck, there have been cases where scenes in movies and games have been 'reappropriated' as real life military or terrorist events by clueless nations and groups. For example:
So your system would have to figure out not just whether something is seen as 'offensive' or 'against the rules', but whether its from a fictional work that might be allowed on the site.
This article misstates when Google acquired YouTube. It was October 2006, not October 2005:
> According to a source close to the moderation process at Reddit, the climate there is far worse. Despite the site’s size and influence — attracting some 4 to 5 million page views a day — Reddit has a full-time staff of only around 75 people, leaving Redditors to largely police themselves
The actual figure is closer to 270 million page views per day.
Those shadows behind the asides that move as you move your mouse are extremely distracting, and they overlap with the body text. For the body font they depend on users having Helvetica or Arial installed to render (neither of which I have) instead of using @font-face, so the text looks out of place.
Also, none of the background images loaded for me until I tried with a browser without ad-blocker and third-party tracking protection.
It's typical for web publications to be mostly bad yet sometimes good, regardless of one's politics. This article is good and interesting (and maybe also wrong in places, I don't know), so let's stay on that planet.
Anyone can be the victim of domestic violence, and anyone can be the perpetrator, but overwhelmingly domestic violence involves a male perpetrator and a female victim.
Even if you expand the definition of "violence" to include "abuse" you still see double the number of female victims of abuse.
Here's a source. They list their methods, they give the Excel sheets.
Here's one quote:
There were differences between males and females in the pattern of relationships between victims and suspects. Women were far more likely than men to be killed by partners/ex-partners (44% of female victims compared with 6% of male victims), and men were more likely than women to be killed by friends/ acquaintances (32% of male victims compared with 8% of female victims).
For fuck's sake.
I think the future is keeping vulnerable groups apart from offensive groups. Hellbanning and honeypot bots are the future. A white supremacist can talk to others like him all day long and there's no harm done. Those who would rather go after black people, or people who are offended by racism, should get algorithmically administered silence, or honeypot bots that react in the best way possible to defuse and calm things down without exposing a human to the distasteful content.
The plus side of this solution is that you can keep tabs on potentially dangerous people and react if they escalate or brag about physical harm they've done.
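The hellbanning idea above can be sketched as a simple visibility filter. This is just an illustration of the technique, with hypothetical names — not any real site's implementation:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    text: str

def visible_posts(posts, viewer, hellbanned):
    """Hellbanning: a banned user still sees their own posts,
    but nobody else does, so the ban is invisible to them."""
    return [
        p for p in posts
        if p.author not in hellbanned or p.author == viewer
    ]

posts = [Post("alice", "hello"), Post("troll", "offensive rant")]
banned = {"troll"}

# A normal user never sees the banned user's posts.
print([p.text for p in visible_posts(posts, "alice", banned)])  # ['hello']
# The banned user sees everything, including their own posts.
print([p.text for p in visible_posts(posts, "troll", banned)])  # ['hello', 'offensive rant']
```

The point of the asymmetry is that the banned user has no signal they've been silenced, so they keep talking into the void instead of creating a new account.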
Right up until one of them decides it's time to leave his little echo chamber and murder a bunch of kids on an island because he and his Internet buddies have convinced themselves that's the right thing to do.
Pretending that there's just one morality that everyone normal must adhere to is more 1984 than any of my ideas.
Do you think you can stop stupid people from doing harmful things by banning them from expressing their stupid opinions online?
Breivik is the exception, not the rule, and it could all have been avoided if he had been allowed to speak freely and discuss his demented ideas with people who don't mind, so he could be monitored.
You're right, but most of these companies like that mentality, because a user in that mode is stickier and more profitable.
And that's OK! All of these private platforms can find their own happy medium. Those who want to watch Nazi crap might not find any mainstream forum for that, but surely one of them is resourceful enough to set up a site for that purpose. This might make "keeping tabs" more difficult, unless of course the entire site is a honeypot created for that purpose, by other parties. (Admittedly I don't see much use in surveilling Nazis, but there are other groups that more often switch from talk to violence.)
Here's a recent example, not even the most egregious, just a recent one.
Notice how many deeply nested comment threads, voted high up in the comment stack, have been excised from the discussion.
It's disgusting. It's very often the case that the top voted contributions have been 'disappeared.'
Again, just disgusting. What exactly is the point of a 'democratic' news site, if there is this constant intervention from unaccountable authorities, constantly policing what information and opinions are allowed to be discussed.
Any information outside the scope of a narrow ideological agenda is summarily terminated. Is this the public square of the future that we want?
kn0thing, spez, What are your thoughts?
So clearly, you can talk about nearly anything you want on reddit. Your problem is that you want to be able to talk about anything you want in somebody else’s subreddit.
It’s as if you move onto my street, live in a perfectly good house, but complain about the fact that you can’t do whatever you like in my house.
Freedom to say whatever you want is not the same thing as forcing other people to pay attention to it. You need to either find like-minded people moderating a like-minded subreddit, or find a way to get other people interested in your own subreddit.
This business of demanding that everybody else give you a platform because “freedom of speech” is flat-out wrong.
Your own blog is the only free platform (for non-extreme definitions of “free.”) All other media involve some sort of overt or more subtle curation.
/r/ama, /r/iama, /r/askscience, /r/syriancivilwar (cited by news organizations as a source and informing their methodology)...
Moderation doesn't automatically connote good subs, but the correlation is very strong.