There is no single good solution for content moderation. That is why giant communities controlled by a single entity with a single standard are inherently toxic themselves. The solutions are innumerable, and they should be. That's how the Gordian knot of moderation is cut: not with one set of standards, but with a diversity of them.
Federation and peer-to-peer seem to be the only way forward. Unfortunately, most users of the big sites are there for monetary reasons, and those reasons don't translate to smaller sites. So they'll stick around until the censorship gets so tight that the money dries up (i.e., what Google/YouTube is doing now).
There are definitely other problems besides a centralized internet. Consider attacks by corporate and state-level adversaries in the form of astroturfing and fake news. Those issues existed outside of a centralized internet, and arguably can be addressed better by centralized access to information.
Look at the Great Firewall of China: do you think the Russian disinformation campaign could have succeeded there to the extent it did in the West? Free access to the internet lowers the barrier for bad actors. Having said that, I obviously prefer the Western model of internet access, but I'd like to point out that it doesn't come without its own set of problems.
What proof do you have that the Russians had a successful disinformation campaign in the west? I've seen none.
To say it wasn't successful is just being obtuse.
And then there's the DNI report, which, if you actually read it, paints a pretty lame picture of what actually happened and in no way supports the idea that the election was impacted.
This article is from within days of the one you posted:
It covers the other tactics used and how they were successful—namely false identities.
The US intelligence community saw it as enough of a threat going forward that they acted ahead of the 2018 midterm elections:
The identities used were things like sockpuppet accounts and fake communities set up to make it look as if virulent discord was already occurring organically, dragging legitimate online communities and identities into that same discord.
Here in Canada, CSIS has put out warnings for Canadians about that very thing in the lead-up to the federal election later this year. They said it would happen, and now Canadian online communities are devolving into trolling, anger and vitriol like never before.
The main thrust of the Wired article is that while there were other tactics, they had different goals. The article is saying almost explicitly that these other tactics didn't (and weren't designed to) change people's minds (e.g. sway an election). They were designed to foment conflict between groups already opposed, e.g. orchestrating protests for two opposing groups at the same street corner, turning opposing thought into violent action. To what degree this was organic after 2016 vs. catalyzed by Russia is a great question to dig into, but it shouldn't be conflated with tipping a vote. Given that I'm rather condemnatory of any of these protest movements as futile and counter-productive shit-stirring or worse, I'm not terribly surprised by or sympathetic towards the "victims".
Edit: quoted some silliness from Wired article
I don't know much about Ms. McKew as a writer other than that she was present during the Russia-Georgia conflict and would have witnessed first-hand Russia's prototyping of these kinds of tactics; she has also written for other reputable outlets like the Washington Post and was interviewed by the Harvard Political Review. Couple that with the fact that not only our intelligence agencies, or even the Five Eyes, but other national agencies like the AIVD (Netherlands) have all confirmed that this is happening. They aren't exactly green. And while I would be remiss to just dismiss a university study, I don't know who the researchers were, their methods, or their scope. If it's only about the fake news items, that stuff was almost passé by the time these other tactics really took hold.
If you take a look at the CBC article and the timing of the attacks (for lack of a better word), they aligned quite tidily with a continuing stream of anti-immigrant conversation online, to the point where those vitriolic approaches to the subject of immigration were wrapped up in the yellow-vest marches here.
She's also a particularly prominent target of attack for various [outspokenly] right-wing outlets and pro-Russian sources. It doesn't take much of a search to find that.
I'm only trying to compartmentalize these two phenomena so they can be talked about separately, because there certainly is a narrative out there that the election was illegitimate because of election interference from Russia, and none of the evidence supports that. In fact, the DNI report explicitly disavows that idea. Irresponsibly pushing that narrative, or allowing a conversation about Russian meddling to implicitly support it, only serves the goal of fomenting institutional destabilization.
I'm not trying to minimize the concern we should have over a foreign government being an agent provocateur for civil disturbance. I'm just saying that it's contrary to the evidence to think that these actions swayed the election.
If you want to see one (non-scientific) example of media affecting someone's political outlook, I recommend https://en.wikipedia.org/wiki/The_Brainwashing_of_My_Dad
It's from before all of this and mainly focuses on the Fox News/talk radio style of vitriolic, non-factual reporting. Similar tactics were employed and amplified. I'd be surprised if they had no effect, given the amount of noise that has gone up. I've personally seen it in others as well, but I understand that adds little to your search.
Intent =/= Actual Effectiveness
The current debt regime is doing a fantastic job at alienating a generation from the status quo. If Russia paid to piss in a Great Lake of piss, that doesn't mean the Great Lake of piss was created or even substantially altered by Russia.
What Russia supposedly "did" to us did not even register as a blip. Russia's biggest crime in this case was coming from a place that Americans are remarkably naive and ignorant about.
But seeing the sources above I might have been wrong about that. Still they certainly tried to be effective.
The fact is, despite all the media whinging, it turns out it's incredibly difficult if not impossible to brainwash an entire nation of people into voting for someone they wouldn't normally vote for.
At some point, an incredibly obstinate subset of the American population will need to realize that Hillary failed for very mundane reasons. At the end of the day, people did/do not like her. Blaming Russia is about the most complex answer you can come up with as to why she lost, which should be the first clue that some critical thinking is in order.
Now, kompromat, both in criminal terms and in terms of personal embarrassment, is another story, but that is a failing of the USA. The 'checks and balances' allowed a President compromised by a foreign power to last at least two years.
Give me a break. There is PLENTY of toxicity online, just like there is in real life.
In fact the lack of accountability in the online world lets people get away with MORE toxicity than they would in the physical world.
Yes, centralized/privatized control of the spaces in which we congregate and interact online is also a big problem, but it's a completely different problem.
Solving one won't solve the other.
I've been homeless enough that reading about "toxicity" is like reading about a toddler saying lime applesauce is too sour. There are physical realms where such concern would be far better directed, where you could secure future safety and security for your child with far more impact!*
Your child is probably swallowing little bits of plastic on a daily basis, is probably being fed a diabetes-inducing diet, is subject to a regime of hyper-stimuli, etc. Toxicity is omnipresent, but not because of the Russians.
* - via helping create the garden of neighborhoods and communities that are sane
I agree with you 100% that solving real-world problems like the ones you describe is vitally important, it's just not the topic at hand.
Yes, that's a big part of it.
You guys are making me feel old today, citing Usenet and IRC, but here's another one from the past that makes a lot of sense:
"The social dynamics of the net are a direct consequence of the fact that nobody has yet developed a Remote Strangulation Protocol." -- Larry Wall.
That's why this is in the guidelines:
"Be civil. Don't say things you wouldn't say face-to-face. Don't be snarky. Comments should get more civil and substantive, not less, as a topic gets more divisive."
I humbly think that "don't write what you wouldn't say to someone's face" bit should be called Welton's Rule or something like that :-)
Of course, any community can have an internal set of rules, but a universal filter that removes things automatically or doesn't let the user post them doesn't make any sense unless your goal is to censor unwanted speech.
You reduce this down to "censorship!" but really it's about creating and maintaining spaces where people are not feeling attacked or threatened.
Imagine you live in NYC and you're in Central Park with your spouse, minding your own business and trying to have a conversation and someone walks up and starts shouting obscenities and insults at you and your spouse non-stop, for no reason. You don't know this person, you can't hear each other anymore, and if you try to walk away they just follow you and keep at it.
(I'm using Central Park as a stand-in for whatever congregation space that people in a community will gravitate towards in order to feel connected to their community and participate in social interactions).
I don't think most mature adults would consider this acceptable behaviour, and in fact in most civil societies this actually _is_ illegal (in Canada it's called causing a public disturbance).
But the above is basically the online experience for a lot of people (mostly women) today in large forums such as Twitter, Reddit and other places where a lot of our public discourse happens and where people want to join in and participate.
Yet there are no consequences for these bad actors online who are effectively doing the same thing. There is nothing legitimate or defensible about trolling, brigading, bullying or harassment online.
So spare me the "censorship" argument until you've got a suggestion about the "civility" problem first.
People are different. You might feel attacked if I defend Muslims because you're Jewish, or you might feel attacked if I defend Jewish people because you're Muslim.
Who decides who's right, Facebook? And if they stand by Muslims, do you not see a problem with not letting Jewish people express their ideas?
> Imagine you live in NYC and you're in Central Park with your spouse, minding your own business
No. I'm not in Central Park with my wife. We're talking about conversations on the internet.
> Yet there are no consequences for these bad actors online who are effectively doing the same thing. There is nothing legitimate or defensible about trolling, brigading, bullying or harassment online.
Who decides what's bullying, harassment, etc.?
> So spare me the "censorship" argument until you've got a suggestion about the "civility" problem first.
There is no solution. Online communities mirror real-life communities. Spare me your presence on the internet until you can handle talking to other people like a normal adult without getting triggered if someone says something that offends you.
Maybe I wasn't clear (even though I'm pretty sure I was), I am talking about full-on uncivilized discourse like threats, harassment, verbal abuse and other such interactions that no reasonable person will consider acceptable regardless of the topic.
I am not talking about differences of opinion - no matter how controversial - in religion, doctrine, politics, race, etc..
> No. I'm not in Central Park with my wife. We're talking about conversations on the internet.
Again, my example was a person in a public space yelling in your face and not engaging in dialogue. It doesn't matter at that point what the topic is or what that person's views are, it's not acceptable behaviour.
I was using Central Park as a proxy for "the place people want to go to participate", because some people say "if you don't like the harassment then just leave" which then prevents people from participating in spaces where everyone else is and that's also not ok.
When trolls or harassers drop into your Twitter mentions or start brigading you on Reddit, it's not entirely unlike a stranger coming up to you uninvited and yelling in your face in the physical world.
> Who decides what's bullying, harassment, etc.?
We all do, as a society. And I call bullshit on the argument that if we can't all agree on what that is, we should do nothing while real harm is done.
Much of the trolling and harassment that happens online is dishonestly defended as "differences of opinion" or "free speech". The opposite certainly happens too: people also try to categorize legitimate criticism as harassment, but that's much less frequent. And that also exists in the real world, of course.
As I mentioned in my example, in the physical world we have defined "causing a public disturbance" as a crime. This is a vague description on purpose, and it is open to interpretation, and there is a defined process for resolving individual cases, but there is still a focus on first protecting the people being victimized and then sorting it out afterwards. This is codified in a system of law, charters of rights, etc.
This is lacking in our online spaces, we allow harassment to persist while we endlessly debate what is acceptable and also what the owners of our online spaces should do about it.
And this does 100% tie into the point that our online social spaces are not public spaces, and are not even internationally recognized spaces. They are owned and operated by for-profit US corporations who have their own motives and will define "acceptable" in ways that do not align with civil society's views, that's for sure. I agree with you there, it's a complicated thing to fix that doesn't have much precedent, and there's no perfect solution in sight.
But I guess we disagree on what we should be doing about it.
I think Facebook and Twitter should now be treated like utilities, like phones and electricity, meaning they should be restricted in how much freedom they have since they have gained so much power. Regardless, it doesn't make any sense to delegate to these companies, which have been caught spying on users and doing crazy stuff, the power to decide what we can say on the internet.
In addition, even if the intentions are good, a censoring tool will always be abused. The only possible solution is mirroring the USA's free-speech standard: if it's not a call for immediate violence, it's allowed.
Uh... there's plenty of toxicity on IRC, and there was plenty on Usenet too.
So no, I don't think that's really the answer.
One of the things I kind of enjoy about local politics where I live is that it's a "repeated game". It's a small enough town that the other actors are people you'll encounter day to day and have to get along with, so even if you vehemently disagree with them, you need to keep it civil.
> There is nothing "toxic" about the internet
There are quite a lot of terrible and toxic things on the internet.
> Federation and peer to peer seem to be the only way forwards.
Yes, one would hope the entire internet ends up like Tor: a place where you can buy drugs, hire hitmen, and find child porn.
/r/fatpeoplehate and /r/incels are admin-banned subs that weren't spam or illegal.
It isn't all direct reddit admin/corporate action. They put the onus on the subs to do their dirty work and enforce the corporate rules to protect their income and interests.
Let's take YouTube, for instance.
Imagine if you had the choice of 10 different services that let you upload videos. Once a video is uploaded, they provide a key to retrieve it.
You can then use this key on any of 10 different video display sites to actually play the video for the end user.
The video display sites provide the monetization and discovery services. The upload sites provide the bandwidth.
This way you can migrate your videos whenever you want to a site which is more friendly to your type of content.
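To make that split concrete, here's a minimal sketch in TypeScript. All of the names (StorageHost, DisplaySite, InMemoryHost, SimpleSite) are made up for illustration; this isn't an existing API, just the separation described above: upload hosts provide bandwidth and hand back a key, display sites provide discovery and monetization, and migration is just re-pointing the display site at a different host.

```typescript
// A storage host only provides bandwidth: it stores bytes and hands back a key.
interface StorageHost {
  name: string;
  upload(video: Uint8Array): Promise<string>;  // returns a retrieval key
  fetch(key: string): Promise<Uint8Array>;     // any display site can call this
}

// A display site provides discovery and monetization, but owns no video bytes.
interface DisplaySite {
  name: string;
  list(): { title: string; host: StorageHost; key: string }[];
  publish(title: string, host: StorageHost, key: string): void;
}

// Toy in-memory host, standing in for one of the "10 upload services".
class InMemoryHost implements StorageHost {
  private store = new Map<string, Uint8Array>();
  constructor(public name: string) {}
  async upload(video: Uint8Array): Promise<string> {
    const key = `${this.name}-${this.store.size + 1}`;
    this.store.set(key, video);
    return key;
  }
  async fetch(key: string): Promise<Uint8Array> {
    const bytes = this.store.get(key);
    if (!bytes) throw new Error(`unknown key: ${key}`);
    return bytes;
  }
}

// Toy display site: just a catalog of (title, host, key) pointers.
class SimpleSite implements DisplaySite {
  private catalog: { title: string; host: StorageHost; key: string }[] = [];
  constructor(public name: string) {}
  list() { return this.catalog; }
  publish(title: string, host: StorageHost, key: string) {
    this.catalog.push({ title, host, key });
  }
}

// Migration is copying the bytes to a friendlier host and updating the pointer;
// the display site (discovery/monetization) doesn't need to change at all.
async function demo() {
  const hostA = new InMemoryHost("host-a");
  const siteX = new SimpleSite("site-x");
  const key = await hostA.upload(new Uint8Array([1, 2, 3]));
  siteX.publish("my video", hostA, key);

  const hostB = new InMemoryHost("host-b");
  const newKey = await hostB.upload(await hostA.fetch(key));
  siteX.publish("my video (migrated)", hostB, newKey);
}

demo();
```

The point of the sketch is that the key, not the host, is the stable identity of the video, so no single host can hold your content hostage.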
Say you have a person with an extremist view on dating and women - a 1 in 1,000,000 type view. Well, social media gives them a place where hundreds of like-minded individuals can gather in one spot and reinforce their extremism. Then, people with non-extremist views take notice of the community and stop by with popcorn because it's outrageous, disgusting, but above all, entertaining.
I think sometimes we overestimate how toxic a community is by doing bad math: "Oh, /r/incels has a subscription count of X and growing?? It must be converting more users to its extremism!! Quick, ban it!" But what percentage of those subscribers were just popcorn bystanders, and what percentage of the content creators were trolls? I would guess a much higher share than one would expect.
Why? I mean, what do you care how many people say the earth is flat? As for Nazis, Antifa and whatever other group you care to use as an example, it seems like we already have a decent set of laws in place to protect people from direct harm from those groups.
I'm sorry but I think what's going on at these social media companies and with the legacy media is blatant political censorship.
The author is technically clueless. Nice swipe at Bitcoin, too bad it doesn't even make sense to compare Bitcoin "harvesting" with people going to a web site.
These are all the same lame arguments that have been made over and over. No, r/The_Donald doesn't represent the end of the world. Yes, Reddit had some subs that many people would find objectionable. The totally valid and correct argument is: Don't go visit a site when the content offends you.
Ah, but did you look for the secret passage under the prep station? The recent JRE episode with AJ was talking about how we have this mix of people on the Internet having fun with ideas, sometimes just to get a laugh, sometimes because they are just wondering out loud, and then this subpop of genuinely mentally disturbed people who take it all very literally and too seriously. That there's no filter between these groups is one of the dangers of the Internet, I guess.
I find it strange how The New Yorker feels they should be allowed to be toxic but everyone else isn't. It's strange how they feel they aren't responsible for the mentally unstable people they accidentally influence, but everyone else is.
Using that logic, every movie director, author, journalist, musician, etc. would be liable for the actions of the mentally unstable. That would be the end of all media, including The New Yorker.
What I cannot stand is people demanding that subreddits be removed. I filtered out posts from r/The_Donald immediately, and yet I still see people who cannot believe that subreddit still exists. The_Donald is just a highly moderated right-wing echo chamber, not unlike r/LateStageCapitalism or equivalent left-wing subs. I hate, and am banned from, most of them... but I'm happy they exist for those who subscribe.
If you look into what is being removed from YouTube, Twitter, Patreon, etc., it's not people calling for genocide who are being removed. Nobody would argue over those.
The 'toxicity' being removed seems to be political views.
Patreon is a perfect example of the system working correctly, however. They started banning people over politics. The right wing left Patreon and went to SubscribeStar. Patreon is now in money trouble.
“Patreon needs to build new businesses and new services and new revenue lines in order to build a sustainable business,” said Patreon CEO Jack Conte.
The New Yorker, along with the rest of the media, uses the same excuse that China, Russia and others use to censor. I guess "detoxify" sounds better than "clean up".
All this "detoxify" or "clean up" internet nonsense is just cover to censor and control the masses.
You need to spend some more time with the Usenet archives.
But you are missing my point. My point isn't about "toxicity", it's about censorship. "Toxicity" is just an excuse to censor and control.
Usenet had been around for decades and has always been "toxic". So what? Did the world end?
And who defines normal? The saudis? The chinese? The catholic church? The US government? The media? Or do you get to decide?
If you find it unpleasant, then don't use it. If you don't like baseball, then don't watch it. Go watch something else. Every time we as a society get beyond the puritanical blasphemy craze, it rears its ugly head again.
When the public forum is demolished, and discourse flees to corporate platforms, we no longer have a meaningful choice.
It's only a matter of time before "hate speech" will get you automatically deplatformed. It's going to be fun seeing how what's considered "hate speech" evolves.
Facebook and Twitter in particular are known to censor conservatives.
I actually don't know if they do it because these companies are in traditionally left-wing California, or because of fear of activists.
They are for-profit entities that leech off the commons (user-generated content) to extract rents. That is the opposite of far left.
If you mean Stalinist dictatorships, I can't disagree.
Anyone on the actual left, e.g. labor organizers, is highly critical of a tech industry we see as paying Liberal lip service while implementing as much Authoritarianism as they can get away with. See:
1. gig work wage theft,
2. Tinder & gender,
3. Facebook and privacy,
4. Salesforce and ICE,
5. Palantir and pre-crime
If you do find it: congrats, you must be pretty inventive.
On the extremist content: those people are not well versed. If you reaaaally want to, you can let it brainwash you, but at that point a book could do the job just as well.
And: internet extremists don't have significant attention spans and can barely act in the world. If they learned how to read, well... they'd soon know better.
Are "book extremists" still a thing?