Reddit and the Struggle to Detoxify the Internet (2018) (newyorker.com)



There is nothing "toxic" about the internet except for the centralization of communication on a handful of corporate controlled social networks that have perverse incentives to censor anything that might impact their advertising or other income.

There is no single good solution for content moderation. That is why giant communities controlled by a single entity with a single standard are inherently toxic themselves. The solutions are innumerable, and should be. That's how the Gordian knot of moderation is cut: not one set of standards, but a diversity of them.

Federation and peer-to-peer seem to be the only way forward. Unfortunately, most users of the big sites are there for monetary reasons, and those reasons don't translate to smaller sites. So they'll stick around until the censorship gets so tight the money dries up (i.e., what Google/YouTube is doing now).


>There is nothing "toxic" about the internet except for the centralization of communication on a handful of corporate controlled social networks that have perverse incentives to censor anything that might impact their advertising or other income.

There are definitely other problems besides a centralized internet. Consider attacks by corporate and state-level adversaries in the form of astroturfing and fake news. Those issues existed outside of a centralized internet, and arguably can be addressed better by centralized access to information.

Look at the Great Firewall of China: do you think the Russian disinformation campaign could succeed there to the extent it did in the west? Free access to the internet lowers the barrier for bad actors. Having said that, I obviously prefer the western model of internet access, but I'd like to note that it doesn't come without its own set of problems.


"Russian disinformation campaign could succeed there to the extent it did in the west?"

What proof do you have that the Russians had a successful disinformation campaign in the west? I've seen none.


What blatant BS. We know from Facebook that the Internet Research Agency and other Russian-backed organizations bought political ads with the clear intent to spread misinformation and inflame both sides in order to sow unrest and distrust in our political system.

To say it wasn't successful is just being obtuse.


Well, there were studies showing the impact was minimal to zero - that it reached very few people, constituted a fraction of a percent of total political content on social media, and that those it did reach were already likely to be hyper-partisan.

https://www.vox.com/conversations/2017/1/27/14266228/donald-... https://www.nytimes.com/2018/01/02/health/fake-news-conserva... https://www.nytimes.com/2018/02/13/upshot/fake-news-and-bots...

And then there's the DNI report, which, if you actually read it, paints a pretty lame picture of what actually happened and in no way supports the idea that the election was impacted.


It's not the fake news reports that were the effective tactic. That was just one of many tactics they used in the attack (if you will).

This article is from within days of the one you posted:

https://www.wired.com/story/did-russia-affect-the-2016-elect...

It covers the other tactics used and how they were successful—namely false identities.

The US intelligence community saw it as enough of a threat going forward that they acted ahead of the 2018 midterm elections:

https://www.aljazeera.com/news/2019/02/disrupted-russian-tro...

The identities used were things like sockpuppet accounts and fake communities, set up to make it look as if virulent discord was already occurring organically, dragging legitimate online communities and identities into the same discord.

Here in Canada, CSIS has put out warnings for Canadians ahead of the federal election later this year about that very thing. They said it would happen, and now Canadian online communities are devolving into trolling, anger, and vitriol like never before.

https://www.cbc.ca/news/canada/twitter-troll-pipeline-immigr...

https://globalnews.ca/news/4518563/canada-target-russian-tro...

edit: Formatting


We need to be careful about reading too much into the Wired article. First, the article makes statements like this: "So anyone trying to tell you there was little impact on political views from the tools the Russians used doesn't know. Because none of us knows." Well, actually, the aforementioned studies shed some good light on the impact of fake news stories, and soundly debunk the idea that they were influential in the election. I'll take studies from universities over an opining Wired article any day.

The main thrust of the Wired article is that there were different tactics, but they had different goals. The article is saying almost explicitly that these other tactics didn't (and weren't designed to) change people's minds (e.g. sway an election). They were designed to foment conflict between groups already opposed, e.g. orchestrating protests for two opposing groups at the same street corner, turning opposing thought into violent action. To what degree this was organic after 2016 vs catalyzed by Russia is a great question to dig into, but it shouldn't be conflated with tipping a vote. Being that I'm rather condemnatory of any of these protest movements as futile and counter-productive shit-stirring or worse, I'm not terribly surprised about or sympathetic towards the "victims".

Edit: quoted some silliness from Wired article


The political views impacted were indirectly affected by the aforementioned online communities. Whatever the design of the groups, the resultant effects have been overwhelmingly political.

I don't know much about Ms. McKew as a writer, other than that she was present during the Russia-Georgia conflict and would have witnessed first-hand Russia's prototyping of these kinds of tactics; she's also written for other reputable outlets like the Washington Post and was interviewed by the Harvard Political Review [0][1]. Couple that with the fact that not just our intelligence agencies, or even the Five Eyes, but other national agencies like the AIVD (Netherlands) have all confirmed that this is happening. They aren't exactly green. And while I would be remiss to just dismiss a university study, I don't know who the researchers were, their methods, or their scope. If it's only about the fake news items, that stuff was almost passé by the time these other tactics really took hold.

If you take a look at the CBC article and the timing of the attacks (for lack of a better word), they aligned quite tidily with a continuing stream of anti-immigrant conversations online, to the point where those vitriolic approaches to the subject of immigration were wrapped up in the yellow-vest marches here.

[0] http://harvardpolitics.com/interviews/molly-mckew/

[1] She's also a particularly prominent subject of attack for various [outspokenly] right-wing outlets and pro-Russian sources. It doesn't take much of a search to find that.


Yeah, political views were impacted in the sense of "quickened". I agree that the evidence supports that these tactics have been used and documented elsewhere (Ukraine being a great example), and the phenomenon of increasing destabilization seems to be progressively effective. That phenomenon is different from changing someone's voting preference - all the studies we've cited (in the US specifically) discuss these tactics as "hardening" existing views, not changing them.

I'm only trying to compartmentalize these two phenomena so they can be talked about separately, because there certainly is a narrative out there that the election was illegitimate because of election interference from Russia, and none of the evidence supports that. In fact, the DNI report explicitly disavows that idea. Irresponsibly pushing that narrative, or letting a conversation about Russian meddling implicitly support it, serves the very goal of fomenting institutional destabilization.

I'm not trying to minimize the concern we should have over a foreign government being an agent provocateur for civil disturbance. I'm just saying that it's contrary to the evidence to think that these actions swayed the election.


As far as I can tell, knowing explicitly whether somebody's political leanings were shifted is impossible to test - earlier, now, or later. People are prone to lying about those things (either to themselves or to others, for many reasons).

If you want to see one (non-scientific) example of media affecting someone's political outlook, I recommend https://en.wikipedia.org/wiki/The_Brainwashing_of_My_Dad

It's from before all of this and mainly focuses on Fox News'/talk radio-style vitriolic, non-factual reporting. Similar tactics were employed and amplified. I'd be surprised if they had no effect, given the amount of noise that has gone up. I've personally seen it in others as well, but I understand that adds little to your search.


I hadn't seen these sources, thanks. Guess I'll have to update my priors a bit.


>order to sow unrest and distrust in our political system.

Intent =/= Actual Effectiveness

The current debt regime is doing a fantastic job at alienating a generation from the status quo. If Russia paid to piss in a Great Lake of piss, that doesn't mean the Great Lake of piss was created or even substantially altered by Russia.


They've been doing that sort of thing for a long, long time. And so have we, and so does China; every state or corporation with the capability will do things like that. If your democracy can be subverted by some Facebook ads and Twitter bots, you have far more fundamental problems. I'd be more concerned about actual election tampering, for example a news media corporation giving debate questions to a candidate before the debate.


If you think that Russia is: a) the only one doing this and b) the most effective actor doing this, then you're the one being obtuse.

What Russia supposedly "did" to us did not even register as a blip. Russia's biggest crime in this case was coming from a place that Americans are remarkably naive and ignorant about.


I never said Russia was the only one or the most effective. I wasn't trying to get into a WhatAboutThem debate; I was just saying that claiming Russia did nothing effective is wrong.

But seeing the sources above, I might have been wrong about that. Still, they certainly tried to be effective.


People who have worked for Russian troll farms have given interviews:

https://www.wnycstudios.org/story/curious-case-russian-flash...


There's nothing in that interview that suggests any of this was effective (if it even happened at all). Even Correct The Record (Hillary's massive social media astroturf team) couldn't turn the election around for her, and it was orders of magnitude larger than what this podcast claims the Russians did.

https://en.wikipedia.org/wiki/Correct_the_Record

The fact is, despite all the media whinging, it turns out it's incredibly difficult, if not impossible, to brainwash an entire nation of people into voting for someone they wouldn't normally vote for.


@drhodes, it's not impossible to say. https://www.vox.com/conversations/2017/1/27/14266228/donald-... https://www.nytimes.com/2018/01/02/health/fake-news-conserva... https://www.nytimes.com/2018/02/13/upshot/fake-news-and-bots...

At some point, an incredibly obstinate subset of the American population will need to realize that Hillary failed for very mundane reasons. At the end of the day, people did/do not like her. Blaming Russia is about the most complex answer you can come up with as to why she lost, which should be the first clue that some critical thinking is in order.


[flagged]


Putin helped to elect Trump in the same way an ant riding an elephant remarks, "Look at all the dust we are leaving behind." In the land of Citizens United and unlimited SuperPACs, the fake-news factories barely register in influence measurements.

Now, kompromat, both in criminal terms and in personal embarrassment, is another story, but that is a failing of the USA: the "checks and balances" have allowed a President compromised by a foreign power to last at least two years.


How do you measure success?


The same ways you measure engagement with any other advertising campaign: comments on promoted posts, number of likes, number of shares, number of clicks, etc.
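
To make that concrete, here's a toy calculation (all counts invented; a real campaign would pull these numbers from the ad platform's reporting tools):

    # Toy engagement-rate calculation with made-up numbers.
    interactions = {"comments": 1200, "likes": 45000, "shares": 8300, "clicks": 22000}
    impressions = 2500000  # times the ad was shown

    engagement_rate = sum(interactions.values()) / impressions
    print(f"{engagement_rate:.2%}")  # -> 3.06%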


> There is nothing "toxic" about the internet except for the centralization of communication on a handful of corporate controlled social networks that have perverse incentives to censor anything that might impact their advertising or other income.

Give me a break. There is PLENTY of toxicity online, just like there is in real life.

In fact the lack of accountability in the online world lets people get away with MORE toxicity than they would in the physical world.

Yes, centralized/privatized control of the spaces in which we congregate and interact online is also a big problem, but it's a completely different problem.

Solving one won't solve the other.


Dealing with physical "toxicity" is an order of magnitude more relevant than ensuring that online spaces are composed of doubleplusgood thoughts, with the definition of doubleplusgood being ever more amorphous and ever more transparent as a social-climbing strategy...

I've been homeless enough that reading about "toxicity" is like reading about a toddler saying lime applesauce is too sour. There are physical realms where such concern would be far better directed, where you could secure future safety and security for your child with far more impact!*

Your child is probably swallowing little bits of plastic on a daily basis, is probably being fed a diabetes-inducing diet, is subject to a regime of hyper-stimuli, etc. Toxicity is omnipresent, but not because of the Russians.

* - via helping create the garden of neighborhoods and communities that are sane


I never said it was a zero sum game. But the discussion was about online spaces.

I agree with you 100% that solving real-world problems like the ones you describe is vitally important, it's just not the topic at hand.


> In fact the lack of accountability in the online world lets people get away with MORE toxicity than they would in the physical world.

Yes, that's a big part of it.

You guys are making me feel old today, citing Usenet and IRC, but here's another one from the past that makes a lot of sense:

"The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol." -- Larry Wall.

That's why this is in the guidelines:

"Be civil. Don't say things you wouldn't say face-to-face. Don't be snarky. Comments should get more civil and substantive, not less, as a topic gets more divisive."

I humbly think the "don't write what you wouldn't say to someone's face" bit should be called Welton's Rule or something like that :-)


Toxicity is in the eye of the beholder. To the king of France, any talk of revolution is toxic. If the king of France could have censored everybody's speech with algorithmic precision, his descendant would still be king today.


Online interactions don't need to be censored. They're already regulated by the same laws that apply to normal offline interactions. If it isn't illegal, it should be allowed to stay up, uncensored.

Of course, any community can have an internal set of rules, but a universal filter that removes things automatically or doesn't let the user post them doesn't make any sense unless your goal is to censor unwanted speech.


It's really not that black and white.

You reduce this down to "censorship!" but really it's about creating and maintaining spaces where people are not feeling attacked or threatened.

Imagine you live in NYC and you're in Central Park with your spouse, minding your own business and trying to have a conversation and someone walks up and starts shouting obscenities and insults at you and your spouse non-stop, for no reason. You don't know this person, you can't hear each other anymore, and if you try to walk away they just follow you and keep at it.

(I'm using Central Park as a stand-in for whatever congregation space that people in a community will gravitate towards in order to feel connected to their community and participate in social interactions).

I don't think most mature adults would consider this acceptable behaviour, and in fact in most civil societies this actually _is_ illegal (in Canada it's called causing a public disturbance).

But the above is basically the online experience for a lot of people (mostly women) today in large forums such as Twitter, Reddit and other places where a lot of our public discourse happens and where people want to join in and participate.

Yet there are no consequences for these bad actors online who are effectively doing the same thing. There is nothing legitimate or defensible about trolling, brigading, bullying or harassment online.

So spare me the "censorship" argument until you've got a suggestion about the "civility" problem first.


> it's about creating and maintaining spaces where people are not feeling attacked or threatened

People are different. You might feel attacked if I defend Muslims because you're Jewish, or you might feel attacked if I defend Jewish people because you're Muslim.

Who decides who's right, Facebook? And if they stand by Muslims, do you not see a problem with not letting Jewish people express their ideas?

> Imagine you live in NYC and you're in Central Park with your spouse, minding your own business

No. I'm not in Central Park with my wife. We're talking about conversations on the internet.

> Yet there are no consequences for these bad actors online who are effectively doing the same thing. There is nothing legitimate or defensible about trolling, brigading, bullying or harassment online.

Who decides what's bullying, harassment, etc.?

> So spare me the "censorship" argument until you've got a suggestion about the "civility" problem first.

There is no solution. Online communities mirror real-life communities. Spare me your presence on the internet until you can handle talking to other people like a normal adult without getting triggered when someone says something that offends you.


> People are different. You might feel attacked if I defend Muslims because you're Jewish, or you might feel attacked if I defend Jewish people because you're Muslim.

Maybe I wasn't clear (even though I'm pretty sure I was): I am talking about full-on uncivilized discourse like threats, harassment, verbal abuse, and other such interactions that no reasonable person would consider acceptable regardless of the topic.

I am not talking about differences of opinion - no matter how controversial - in religion, doctrine, politics, race, etc..

> No. I'm not in Central Park with my wife. We're talking about conversations on the internet.

Again, my example was a person in a public space yelling in your face and not engaging in dialogue. It doesn't matter at that point what the topic is or what that person's views are, it's not acceptable behaviour.

I was using Central Park as a proxy for "the place people want to go to participate", because some people say "if you don't like the harassment then just leave" which then prevents people from participating in spaces where everyone else is and that's also not ok.

When trolls or harassers drop into your Twitter mentions or start brigading you on Reddit, it's not entirely unlike a stranger coming up to you uninvited and yelling in your face in the physical world.

> Who decides what's bullying, harassment, etc.?

We all do, as a society. And I call bullshit on the argument that if we can't all agree on what that is, we should do nothing while real harm is done.

Much of the trolling and harassment that happens online is dishonestly defended as "differences of opinion" or "free speech". The opposite certainly happens too: people also try to categorize legitimate criticism as harassment, but that's much less frequent. And that also exists in the real world, of course.

As I mentioned in my example, in the physical world we have defined "creating a public disturbance" as a crime. The description is vague on purpose and open to interpretation, and there is a defined process for resolving individual cases, but the focus is still on first protecting the people being victimized and then sorting it out afterwards. This is codified in a system of law and charters of rights, etc.

This is lacking in our online spaces, we allow harassment to persist while we endlessly debate what is acceptable and also what the owners of our online spaces should do about it.

And this does 100% tie into the point that our online social spaces are not public spaces, and are not even internationally recognized spaces. They are owned and operated by for-profit US corporations who have their own motives and will define "acceptable" in ways that do not align with civil society's views, that's for sure. I agree with you there, it's a complicated thing to fix that doesn't have much precedent, and there's no perfect solution in sight.

But I guess we disagree on what we should be doing about it.


> and this does 100% tie into the point that our online social spaces are not public spaces, and are not even internationally recognized spaces. They are owned and operated by for-profit US corporations who have their own motives and will define "acceptable" in ways that do not align with civil society's views, that's for sure. I agree with you there, it's a complicated thing to fix that doesn't have much precedent, and there's no perfect solution in sight.

I think Facebook and Twitter should now be treated as utilities, like phones and electricity, meaning they should be restricted in how much freedom they have, since they have gained so much power. Regardless, it doesn't make any sense to entrust these companies, which have been caught spying on users and doing crazy stuff, with deciding what we can say on the internet.

In addition, even if the intentions are good, a censorship tool will always be abused. The only possible solution is mirroring the USA's free-speech standard: if it isn't calling for immediate violence, it's allowed.


To your point about regulating these companies as utilities, it seems some political candidates are looking to do that:

https://medium.com/@teamwarren/heres-how-we-can-break-up-big...


> There is nothing "toxic" about the internet except for the centralization of communication on a handful of corporate controlled social networks that have perverse incentives to censor anything that might impact their advertising or other income.

Uh... there's plenty of toxicity on IRC, and there was plenty on Usenet too.

So no, I don't think that's really the answer.


Yeah, interestingly, by far the least toxic places for me are Facebook and a personal Discord server where I know everyone in real life and can curate who the people are.


Knowing people in real life is huge because it's a big disincentive to being a jerk to them even if you disagree.

One of the things I kind of enjoy about local politics where I live is that it's a "repeated game". It's a small enough town that the other actors are people you'll encounter day to day and have to get along with, so even if you vehemently disagree with them, you need to keep it civil.


You don't seem to understand how moderation on reddit actually works. The admins have very little direct impact on it outside of the standard enforcement of spam blocking and illegal content. It is your "solution" of a diversity of standards.

> There is nothing "toxic" about the internet

There are quite a lot of terrible and toxic things on the internet.

> Federation and peer-to-peer seem to be the only way forward.

Yes, one would hope the entire internet ends up like Tor: a place where you can buy drugs, hire a hitman, and find child porn.


> The admins have very little direct impact on it outside of the standard enforcement of spam blocking and illegal content.

/r/fatpeoplehate and /r/incels are admin-banned subs that weren't spam or illegal.


I like how you interpreted "very little" as "none", thanks. Nothing of any value was lost anyway, as both of those subs were shitty, toxic places. And I'm pretty sure they were banned because their mods supported the subs being used to harass people, which violates the TOS.


Another example is the reddit admins shutting down any subreddit that allows discussion of digital rights management. One prime example is r/printsf: if you try to discuss how to get your Amazon-purchased ebook into a format that works with text-to-speech, the local mods of the sub will ban you, because if they allowed it, the reddit admins would ban the sub.

It isn't all direct reddit admin/corporate action. They put the onus on the subs to do their dirty work and enforce the corporate rules to protect their income and interests.


So their effect is not little; it's at least as broad as illegal content, spam, harassment, toxicity, and TOS violations.


Maybe the solution to centralization is decoupling the data architecture from the interface. As in the net neutrality debate, everyone understands that the hardware providing bandwidth is separate from the websites that run on it.

Let's take YouTube, for instance.

Imagine you had the choice of 10 different services that let you upload videos. Once the video is uploaded, they provide a key to retrieve it.

You could then use this key on any of 10 different video-display sites to actually play the video for the end user.

The video display sites provide the monetization and discovery services. The upload sites provide the bandwidth.

This way you can migrate your videos whenever you want to a site that is friendlier to your type of content.
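
Here's a minimal sketch of that split in Python (the names are hypothetical, and a toy in-memory store stands in for a real upload service):

    import uuid

    class UploadSite:
        # Hypothetical storage host: keeps the bytes, hands back an opaque key.
        def __init__(self):
            self._videos = {}

        def upload(self, video_bytes):
            key = str(uuid.uuid4())  # portable key, usable on any display site
            self._videos[key] = video_bytes
            return key

        def fetch(self, key):
            return self._videos[key]

    class DisplaySite:
        # Hypothetical front end: handles discovery/monetization, not storage.
        def __init__(self, upload_sites):
            self.upload_sites = upload_sites  # interchangeable storage backends

        def play(self, key):
            for site in self.upload_sites:
                try:
                    return site.fetch(key)
                except KeyError:
                    continue
            raise KeyError(f"no upload site holds video {key!r}")

    # Upload once; any display site can then serve the video, so moving to a
    # friendlier front end means sharing a key, not re-uploading.
    host = UploadSite()
    key = host.upload(b"cat video")
    print(DisplaySite([host]).play(key))  # -> b'cat video'

The point is that the key, not the front end, identifies the video, which is what makes migration cheap.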


I think what superkuh means is that it's the opinion of many that there's no problem to solve. Okay, there may be toxic comments/content, but that's a feature of the Internet, not a bug. Any solution to rid the Internet of toxicity would be worse than the disease itself. If you want a clean information age, go back to your public library.


Social media is toxic because of its ability to distill extremism.

Say you have a person with an extremist view on dating and women - a 1 in 1,000,000 type view. Well, social media gives them a place where hundreds of like-minded individuals can gather in one spot and reinforce their extremism. Then, people with non-extremist views take notice of the community and stop by with popcorn because it's outrageous, disgusting, but above all, entertaining.

I think sometimes we overestimate how toxic a community is by doing bad math: "Oh, /r/incels has a subscription count of X and growing?? It must be converting more users to its extremism!! Quick, ban it!" But what percentage of subscribers were just popcorn bystanders, and what percentage of content creators were trolls? Much higher than one would expect, I'd guess.


I don't see how popularity is relevant to a content-based ban decision.


It shouldn't be, but it's a factor in how many people are going to notice it and stir up bad publicity.


It's relevant to a discussion of free-speech absolutism in a public forum. Free speech as a principle is valuable and if the cost of millions of people being able to express their views is that one random Nazi gets to have his say, then I think most people would think that it's worth it. If that one Nazi were to turn into a thousand Nazis, then I suspect people would be more circumspect about the whole thing. So the question about how prevalent hate-speech is on Reddit has a lot to say about to what extent Reddit will be willing to censor its users.


> If that one Nazi were to turn into a thousand Nazis, then I suspect people would be more circumspect about the whole thing.

Why? I mean, what do you care how many people say the earth is flat? As for Nazis, Antifa, and whatever other group you care to use as an example, it seems like we already have a decent set of laws in place to protect people from direct harm by those groups.

I'm sorry but I think what's going on at these social media companies and with the legacy media is blatant political censorship.


In my experience, especially on Reddit, "toxic" is defined as anything admins/moderators don't agree with. Meanwhile, other toxic behavior such as doxxing/harassment/raids is ignored, and carried out by meta-subreddits such as subredditdrama and topmindsofreddit.


"According to the ranking service Alexa, the top three sites in the United States, as of this writing, are Google, YouTube, and Facebook. (Porn, somewhat hearteningly, doesn’t crack the top ten.) The rankings don’t reflect everything—the dark Web, the nouveau-riche recluses harvesting bitcoin—but, for the most part, people online go where you’d expect them to go."

The author is technically clueless. Nice swipe at Bitcoin; too bad it doesn't even make sense to compare Bitcoin "harvesting" with people visiting a website.

These are all the same lame arguments that have been made over and over. No, r/The_Donald doesn't represent the end of the world. Yes, Reddit had some subs that many people would find objectionable. The totally valid and correct response is: don't visit a site whose content offends you.


>>an attempt to “self-investigate” claims that the restaurant’s basement was a dungeon full of kidnapped children. Comet Ping Pong does not have a basement.

Ah, but did you look for the secret passage under the prep station? The recent JRE episode with AJ talked about how we have this mix of people on the Internet having fun with ideas, sometimes just to get a laugh, sometimes because they are just wondering out loud, and then this subpopulation of genuinely mentally disturbed people who take it all very literally and too seriously. That there's no filter between these groups is one of the dangers of the Internet, I guess.


How is that any different from the New Yorker or traditional media spreading lies and hate about Jussie Smollett, Covington, or any other incident, and causing mentally unstable people to attack others?

I find it strange how the New Yorker feels it should be allowed to be toxic but everyone else isn't. It's strange how they feel they aren't responsible for the mentally unstable people they accidentally influence, but everyone else is.

Using that logic, every movie director, author, journalist, musician, etc. would be liable for the actions of the mentally unstable. That would be the end of all media, including the New Yorker.


A year late to this, but it's a fantastic and still-relevant topic (content moderation).


So long as Reddit has very specific rules regarding what is forbidden (e.g. posting personally identifiable information), I think the subreddit system is already as close to a perfect solution as I'd want. Subreddits are opt-in and opt-out, and are moderated with VASTLY different tolerances and tastes. I tend to prefer highly controversial conversations, at the expense of pleasantness, and there are subreddits that cater to this just fine. There are others whose sensibilities do not tolerate that, and there are subreddits for that too.

What I cannot stand is people demanding that subreddits be removed. I filtered out posts from r/The_Donald immediately, and yet I still see people who cannot believe that subreddit still exists. The_Donald is just a highly moderated right-wing echo chamber, not unlike r/LateStageCapitalism or equivalent left-wing subs. I hate, and am banned from, most of them... but I'm happy they exist for those who subscribe.


Just wondering: what subreddits do you subscribe to? I enjoy talking about politics, but r/politics is insufferable. I've found r/news to have much better discussions, though it all depends on the post and what type of people a specific headline will attract.


How do you even define "toxic"?

If you look into what is being removed from YouTube, Twitter, Patreon, etc., it's not people calling for genocide. Nobody would argue over those removals.

The 'toxicity' being removed seems to be political views.

Patreon is a perfect example of the system working correctly, however. They started banning people over politics, the right wing left Patreon for SubscribeStar, and Patreon is now having money problems.

“Patreon needs to build new businesses and new services and new revenue lines in order to build a sustainable business,” said Patreon CEO Jack Conte.


(2018)


"Detoxify"? Reddit and the internet was fine until the establishment decided to take control of it and censor it. If people didn't like reddit's or the internet's "toxicity", they wouldn't use it. Nobody is forcing anyone to use reddit. If the puritans at the new yorker or the rest of the media don't like reddit, they simply don't have to use it. It would be like me complaining about the toxicity of the new yorker and demanding the new yorker be "detoxified". Or, I could simply not consume the new yorker's content.

The New Yorker, along with the rest of the media, uses the same excuse that China, Russia, and others use to censor. I guess "detoxify" sounds better than "clean up".

https://news.sky.com/story/china-cracks-down-in-clean-up-of-...

All this "detoxify" or "clean up" internet nonsense is just cover to censor and control the masses.


> Reddit and the internet were fine until

You need to spend some more time with the Usenet archives.


What about it? What you find "toxic", others might not. That's my point. And if there is "toxicity" on Usenet, don't use it, or use the newsgroups you find less "toxic".

But you are missing my point. My point isn't about "toxicity", it's about censorship. "Toxicity" is just an excuse to censor and control.

Usenet has been around for decades and has always been "toxic". So what? Did the world end?


Because normal people don't want to frequent toxic environments. It's unpleasant.


Is that why reddit is so popular? It was "toxic", and yet it grew and normal people used it in droves. Your argument contradicts itself. Most people don't mind reddit or usenet or the "toxicity". It's a handful of busybodies with power trying to justify censorship. No different from the church ladies whining about "toxic" rap or "toxic" TV or "toxic" whatever, when they themselves are toxic too.

And who defines "normal"? The Saudis? The Chinese? The Catholic Church? The US government? The media? Or do you get to decide?

If you find it unpleasant, then don't use it. If you don't like baseball, don't watch it; go watch something else. Every time we as a society get past the puritanical blasphemy craze, it rears its ugly head again.


> Nobody is forcing anyone to use reddit

When the public forum is demolished, and discourse flees to corporate platforms, we no longer have a meaningful choice.


[flagged]


It's strange that among the author's top concerns are consolidation in the porn web space (ignoring that the vast range of porn sites is mostly owned by the same company) and whether porn gets more pageviews than YouTube.


Another leftist publication that advocates censoring unwanted speech.

It's only a matter of time before "hate speech" gets you automatically deplatformed. It's going to be fun seeing how the definition of "hate speech" evolves.


[flagged]


Well, there is. Silicon Valley companies are far-left, or at least act that way.

Facebook and Twitter in particular are known to censor conservatives.

I actually don't know if they do it because these companies are in traditionally left-wing California, or because of fear of activists.


> Well, there is. Silicon Valley companies are far-left, or at least act that way.

They are for-profit entities that leech off the commons (user-generated content) to extract rents. That is the opposite of far left. If you mean Stalinist dictatorships, I can't disagree.


Facebook and Twitter are known to censor leftists. They carefully express neoliberalism and, as the sibling comment suggests, are ultimately nihilistic, profit-driven ventures.

Anyone on the actual left, e.g. labor organizers, is highly critical of a tech industry we see as paying liberal lip service while implementing as much authoritarianism as they can get away with. See:

1. gig work wage theft,

2. Tinder & gender,

3. Facebook and privacy,

4. Salesforce and ICE,

5. Palantir and pre-crime


IMO there's very little on the internet that's significantly more toxic than Hollywood movies and pop music. Porn, I guess.

If you do find it: congrats, you must be pretty inventive.

On the extremist content: those people are not well versed. If you reaaaally want to you can let it brainwash you, but at that point a book could do the job as well.

And: internet extremists don't have significant attention spans and can barely act in the world. If they learned how to read, well... they'd soon know better.

Are "book extremists" still a thing?


Too cynical? IMO the topic demands a huge grain of salt, but perhaps I was otherwise unproductive.



