Facebook to ban white nationalist content (fb.com)
890 points by anigbrowl 27 days ago | 1525 comments



At what point do such things change into a public discourse problem?

Do we wait until it can be shown that companies such as Facebook, Twitter, Google, etc. influenced elections because they refused to serve information from a candidate they didn't like? If we aren't already there, it won't be long before we are.

What do you think would happen if these "private companies" with such deep hooks into our communication infrastructure suddenly decided to remove all data associated with the Republican Party? For that matter, the Democratic Party?

It seems to me that Facebook and Twitter are trying to have it both ways. They can choose to police the content provided by their users but can't be held responsible for said content? Are they a publisher or a platform?

I don't think old thinking based around old methods of communication applies to what we have today; it requires new thinking. These aren't like newspapers sold by kids on the corner in a city that can have dozens of newspapers countering each other. Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas.

Tim Pool is right, at this rate, sooner or later, the Feds will come knocking and will shut that party down.


> What do you think would happen if these "private companies" with such deep hooks into our communication infrastructure suddenly decided to remove all data associated to the Republican Party? For that matter, the Democratic Party?

They wouldn't do that, because there would be justified public outcry. Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.


I can give you hundreds of examples of people being accused of being a white nationalist without any merit to the accusation, to such a degree that the term has almost lost its meaning.

edit: Tim Pool for example was already accused. It will clearly be used to shut down unwelcome dissent.


Before we make this into 'conservatives being silenced', it's worth pointing out that the censorship problem also exists for leftist outlets that are left of the neoliberal centrist view of the D.C. Democratic Party.


Taking a stand against content moderation was a fundamental leftist position some years ago...

That said, I absolutely agree that leftist positions do get censored. Even if I currently like to underline my dissociation from its authoritarian excesses and general dishonesty on certain subjects, the problem is a general one.


Unrestricted free speech was always a strange issue. Hitchens very eloquently presented how control of speech was used to suppress a lot of groups; he knew about the dangers of radicalization, incitement, and hate speech, but considered those negatives acceptable.

However, not everyone thinks it's so great.

I don't know, and it's probably not something we can just settle easily.

It's usually true that a society cannot simply rely on "laws" to save it, and when the power imbalance gets too extreme, laws won't save anyone anyway. Though it seems having some kind of policy and publicly agreed-upon way to stop serial inciters is not a bad idea. After all, information censorship generally happens through claims of potential national security hazards and other gag orders, not by a too-wide interpretation of hate speech laws. (But of course human creativity is pretty limitless when it comes to suppressing others.)


> the censorship problem also exists for leftists outlets that are left of the neoliberal centrist view of the D.C. Democratic Party.

Which ones, specifically?


such as...


Jimmy Dore, Max Blumenthal, Rania Khalek, Aaron Maté, Michael Tracey, Abby Martin, Matt Taibbi, Katie Halper, Ben Norton, Lee Camp, Ilhan Omar, Mark Ames etc.

There are many, just some off the top of my head.


I'm only familiar with a couple of those names.

Who/what de-platformed Jimmy Dore (here's his YouTube channel; [1])? Or the rest?

[1] https://www.youtube.com/channel/UC3M7l8ved_rYQ45AVzS0RGA


He's been smeared in CNN & Washington Post articles as a crank and conspiracy theorist who runs 'an extremist channel' and should face consequences, and he has videos constantly demonetized and de-ranked, etc.

His channel is not 'deleted', (as is the case with the majority of right-wing commentators as well btw), but it is very much 'shadowbanned'/blacklisted, (economic ruin).

I for example regularly notice the recommendation algorithm skews a lot more towards the centre & right actually even when I am specifically looking for Jimmy Dore videos.

People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.

On Twitter, a lot of them don't even come up in the search results, effectively shadowbanned.

Of course, none of them would be invited on mainstream TV because their viewpoint is not allowed in 'polite circles'.

This is why I have a huge issue with the right simplistically equating D.C. Democrats with 'the left'. I suppose the division comes down to what you care about: social leftism (identity politics), which is easy & lazy and what many on the right focus on, vs. economic leftism, which is what many of those I named discuss and is not really allowed on mainstream media.


For me, Twitter's results for "Dore" show Jimmy Dore as the #3 result [1], following two handles with many more followers.

Contrast that to "Limbaugh", for which Rush Limbaugh is the (buried) #11 result, despite having 2x the followers of anything else.

[1] https://twitter.com/search?q=Dore&src=typed_query

[2] https://twitter.com/search?q=limbaugh&src=typed_query


He's the very first Twitter account suggested when I type 'limbaugh' into the search [1].

In Moments, there's a lot of other people talking about him, so that's what's shown (also a lot more notable Limbaughs, as that's a pretty common surname), whereas Dore is pretty much just him; not that many other people are talking. That's how Moments has always worked, nothing shady there. It's also true if you put e.g. 'Taylor Swift' into Moments: her actual Twitter account is fairly buried, because there's lots of other buzz that the algo deemed more relevant.

If anything, it speaks to his popularity.

And that's despite him not having a verified account - unverified accounts are deranked relative to verified ones for everyone. Type in e.g. 'Sean Hannity' and you'd see his verified account right up top.

Perhaps it's time to admit that the simplistic narrative of 'left censoring the right' is not really true and it's more complex than that. It's really the establishment silencing alternative voices.

The right is more than happy to censor the left on BDS, for example, (with the help of the Democrats even(!)) & cement that into law. I don't see any of the right-wing 'free speech worriers', like Ben Shapiro talk about how wrong that is. In fact they very much support it.

On the other hand, you have left-wing channels like Dore & Secular Talk constantly bring up how it's wrong to censor right-wing voices, even doing long rants on specific cases.

1 - https://imgur.com/a/UfCf9vF


> His channel is not 'deleted', (as is the case with the majority of right-wing commentators as well btw), but it is very much 'shadowbanned'/blacklisted, (economic ruin).

And, as we all know, nobody ever made money off of extremist political commentary before YouTube, right kids?

FFS he has his own website and show. Just because YouTube doesn't give him money doesn't mean he's persecuted. He can feel free to host his videos elsewhere.

> (economic ruin).

Hyperbole alert!

> People like Rania Khalek, Abby Martin... were even more explicitly interrogated by the likes of CNN, had their Facebook pages taken down etc.

Please cite some sources. I'm not really finding any info.


Ok but have any of these falsely accused actually had their account removed?


A while back, my leftist circles kept repeating ad nauseam that Peter Thiel (who is already in a minority, being gay) is a white nationalist. I suspected bullshit and decided to look into it.

All I could find was this: Peter Thiel once spoke at a libertarian conference. Libertarians, being libertarians, permit racism (read: not the same as "support racism"). THEREFORE, PETER THIEL IS A WHITE NATIONALIST.

Are you fucking serious? I pushed back on my leftist friends with what I found. "Duhhhhh, errrrrr, ummmmm, well... everyone just knows he is!" Oh, really.

Bullshit, repeated often enough as truth, becomes evidence for itself.


Honest question, why is this getting downvoted?


Because it's a straw man argument. GP (you) is talking about some argument their friends made, not something anyone in this thread has said.


But it was a related example of how echo chambers can corrupt the truth.


> I can give you hundreds of examples of people being accused of being a white nationalist without any merit to it.

Feel free to do so whenever. Be sure to indicate when that has resulted in serious consequences for the "victim".

> Tim Pool for example was already accused.

TBF Tim Pool pulled the whole "I'm so neutral" act when dealing with a bunch of actual white supremacists. What "dissent" was unwelcome there? That he was pretending like these were totally fine people who just happened to think that genocide was okay?


[flagged]


> just out of interest... such as?

Enjoy!

https://www.youtube.com/watch?v=NQF2-F-GG_o

And here he is with a bunch of others!

https://i.imgur.com/SXqc8PX.jpg

> You could be really helpful here, sugar.

And you could tone down the condescension, sweetie.


I don’t know the facts behind either argument, but now a minimum-wage content moderator has to make this decision 1000 times a day.

Good luck, kid.


By definition, this arrangement favors viewpoints that generate more public outcry, which sets up any marginalized group to be steamrolled. Do you seriously think that's going to be restricted to white nationalism? I mean, there's already plenty of other stuff that Facebook is censoring that the same people who are cheering this news have pushed back against before.

https://www.aclu.org/blog/free-speech/internet-speech/facebo...


...and yet white nationalists are still free to set their own social network up or use networks that don't care, like Voat for example.

That's called freedom of association: the right for an organization or individual to not wish to associate with someone.


What happens when all ISPs in the country combine forces to block Voat, even though there's no law banning it?

https://en.wikipedia.org/wiki/Internet_censorship_in_Austral...

Are you going to suggest that people start their own ISP?

At some point, private actors can carry so much economic power that their private rules effectively become laws. Much of Jim Crow was implemented in this manner, in addition to actual legislation.


ISPs as a group blocking IP-level access to portions of the web is far and above a different thing than a private company or a group of private companies refusing to host content on their servers that they do not want to host. Your attempt to conflate the two is moving the goalposts and arguing a slippery slope.

But even then, VPN or Tor? The internet is built to route around damage. Have Mastodon, IRC, or ICQ ever been blocked?

Again, just because white nationalists don't have the platforms they want doesn't mean they can't get their message out. But what they want is mainstream acceptance, and that is most certainly not going to happen.


The same Voat that just got banned in New Zealand for hosting stuff the government didn't approve of whilst Facebook, which also hosted it, was left alone?

That Voat?

Somehow I'm skeptical this is a viable approach short of Torrifying the entire internet.


> The same Voat that just got banned in New Zealand for hosting stuff the government didn't approve of whilst Facebook, which also hosted it, was left alone?

Take it up with New Zealand, which has very strong laws about hate speech and promoting extremist communities.

> whilst Facebook, which also hosted it, was left alone?

That content makes up a microscopic amount of the content on Facebook, and it was removed when reported. That content makes up the vast majority of the traffic on Voat, however.

It seems like you're moving the goalposts here, though. We're not talking about governments banning websites, we're talking about the government forcing websites to host content that they don't want to host. That's what's at play here.

And the fact is that you are free to start a public or Tor-based community of your own, unless you're in countries where Tor is blocked, in which case I think you're in far deeper shit than this discussion is focused on.


So this problem probably requires some fragile and "temporary" complex solution.

Letting paranoid xenophobes roam, recruit and incite violence on Facebook is bad. And its ill effects are already felt, and it has significant potential for doing a lot more harm in the future.

Whereas allowing a quasi-public forum to be controlled and basically censored by an unaccountable entity (FB) is problematic - especially if said control is co-opted by the very ideology that the control mechanism was supposed to, well, control.

It feels like the problem is that hate speech and other kinds of populist nonsense is currently the tool that easily leads to more and more authoritarianism, more and more xenophobes and isolationists getting into power. But censorship itself is also a great tool for power consolidation.


> They wouldn't do that, because there would be justified public outcry

Where would the outcry happen?

There aren't that many public platforms with any reach, and they're all adopting similar policies.

In [blocked] no one can hear you scream!


You have misused a slippery slope argument to turn the other position into a strawman. There are risks to this FB ban, but silencing outcry about the ban is not one of them.

In practice, the distinction between banning a subject and banning conversation about the subject's ban is easy to make. For example, Germany bans racist speech, but does not ban speech about whether the racist speech ban should continue (and this particular debate is definitely alive and well in Germany). Even in the US we've banned certain words from broadcast TV for many years, but this never limited people from discussing whether the ban should continue!


>In practice, the distinction between banning a subject and banning conversation about the subject's ban is easy to make

And equally easy to abuse, something which has happened time and again:

Complain about the "ban of X"? Discussion shut down as supporting X.


Germany and the US are constitutional republics with independent judiciaries that adjudicate these things.

At FB, Twitter, Snapchat etc, whoever happens to work in the subject banning division arbitrarily makes these decisions, unless overridden by top management.


> In practice, the distinction between banning a subject and banning conversation about the subject's ban is easy to make

Apparently not for Facebook's moderator/censor workforce. Just the other day: https://mobile.twitter.com/OzraeliAvi/status/111040092879067... The day before that, they banned a local satirical comic, and this keeps popping up regularly.


> There aren't that many public platforms with any reach

Okay? Private organizations are under no obligation to provide you with a platform to spread your message.

If you want, you can start your own social network or host your own blog.


This is a great argument against things I didn't say.


Then perhaps if you could be a little clearer about what you are saying, we could have a discussion about that.


> They wouldn't do that, because there would be justified public outcry.

This is an argument for mob rule. I'm not sure that's as comforting as you intended.


>They wouldn't do that, because there would be justified public outcry.

Somehow saying "yes, these multinational corporations could exert undue influence over a political system, but they just wouldn't" does not seem sufficient. I feel that such an attitude is like saying "The US government would not spy on its own citizens -- they wouldn't do that, just imagine the public outcry!" Perhaps that is a bad analogy, but the issue here is that we are nearing the point where "oh, they wouldn't do such a thing" becomes untenable.

The CFO of Google said, in the leaked video[1] of the TGIF immediately following Trump's election, that they would use "the great strength and resources and reach we have to continue to advance really important values." Going by the reactions of everyone in that meeting, their efforts are certainly not impartial or apolitical.

[1] https://www.youtube.com/watch?v=FRf9UxsM-NE


What's the alternative? Governments dictating what kind of speech private communications platforms can and cannot allow? Is that better?


> They wouldn't do that, because there would be justified public outcry.

If Facebook and Twitter suddenly decided to de-platform and ban any discussion praising, defending, or favoring a political party, where would the outcry happen? On Facebook? Sorry, it’s banned!


Traditional media, competing social media, forums, blogs, posters, demonstrations, sticker campaigns.

Journalists would be absolutely salivating at the thought of writing about Facebook's new policy, especially if it was that blatant.

FB and Twitter are not the entire world. It's good praxis to be involved with people in the real world.


No they wouldn't, because this already happens and you hear nothing.

For instance in the UK Facebook banned the pages of a political party and very little was said because the same kinds of people who control Facebook also control the mainstream media, so they're almost always in agreement, except that journalists would really love Facebook to ban even more stuff to achieve the 'right outcomes', as they see it.

https://www.pcmag.com/news/359847/facebook-bans-far-right-uk...


> the same kinds of people who control Facebook also control the mainstream media

There's that conspiracy language again!

And then you go on to link a piece of "mainstream media" that writes extensively about it! Did you mean to prove yourself wrong?


PC Mag is mainstream media, now? How many readers do you think it has compared to a national newspaper?

It's hardly a conspiracy - the worldviews of these people are formed in the same crucibles and result in the same outcomes. They want to manipulate the narrative to ensure the right outcomes, in their view.


> PC Mag is mainstream media, now? How many readers do you think it has compared to a national newspaper?

You're comparing apples to oranges. What's its ranking among tech/PC websites?

How about the Guardian?

https://www.theguardian.com/world/2018/mar/14/facebook-bans-...

How about BBC?

https://www.bbc.com/news/technology-46746601

https://www.bbc.com/news/technology-43398417

How about Wired?

https://www.wired.co.uk/article/facebook-britain-first-far-r...

Is that mainstream enough for you?

> They want to manipulate the narrative to ensure the right outcomes, in their view.

Prove it.


I said very little was said, not literally nothing. We see articles in these outlets decrying tech firms for not doing enough to combat 'extremism' nearly every day. How often do we see mention of political parties being banned? It's not discussed anywhere near as much.

Look, the original comment I was taking issue with said this:

"Journalists would be absolutely salivating at the thought of writing about Facebook's new policy [of de-platforming and banning any discussion praising, defending, or favoring a political party], especially if it was that blatant."

That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.


> I said very little was said

Really? Because from the citations I've given, it looks like a lot was said.

> That clearly isn't the case because it's happened already and journalists didn't salivate over it - they reported the event once and then it was never brought up again.

Because it was uneventful. It was a universally reviled thing that got banned, and rightfully so. There doesn't have to be "another side" to a story when that "other side" is filled with nothing but hate.


Britain First (the political party in question) is a fascist white supremacist group, known for harassment actions and violence, particularly against muslims.

They're the British equivalent of the NSDAP.


> Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

This is highly debatable. There are plenty of examples of benign movements and opinions that have been stifled violently by society and the state.

You can find, with little effort, plenty of people today calling for censoring or even being violent towards harmless liberal or conservative people because they are "communists" or "fascists". People are not good at judgement and measured response.


Luckily, in civilized societies we have a system of checks and balances in place to ensure that doesn't happen.


Do we anymore?

Facebook and Twitter have become a digital public commons for discourse today. The check and balance is "what Facebook decides". That doesn't exactly seem like an adversarial check and balance system to me.


> Facebook and Twitter have become a digital public commons for discourse today

Absolutely correct, however, they aren’t the only places for public discourse. People have never been able to demand a newspaper print their article or that a magazine must include their story—people have always had the choice to start their own newspaper or their own magazine and build their own audience and this is still true today, in fact it’s much easier than it’s ever been.

People whose business is access to human audiences, whether they are newspapers, music venues, theaters, magazines, etc., have almost always had the freedom to set their own standards, and it isn't clear to me why business owners shouldn't have this freedom anymore.


>Absolutely correct, however, they aren’t the only places for public discourse

Small comfort if they are the main places for public discourse - cutting people and ideas off there essentially means relegating them to far less reach.

Strange how when some foreign state censors FB or Twitter it's an outrage, but when FB or Twitter censor people directly, "there are other places".

Not to mention the monetary deplatforming (e.g. Mastercard, PayPal, Patreon and co not allowing funding), in which case there are no "other places" (not many in any way, and not reputable for someone to go pay there).

>People have never been able to demand a newspaper print their article or that a magazine must include their story

Which is irrelevant, since newspapers and magazines were always top-down affairs, written and curated by a specific team. Social media platforms were supposed to be open to society (hence "social"), not only for a select team of journalists to have an account there.


> Strange how when some foreign state censors FB or Twitter it's an outrage, but when FB or Twitter sensor people directly "there are other places".

Because a government has a monopoly on violence, while a private company has freedom of association. You're conflating two different situations that are only superficially similar.

> Social media and platforms were supposed to be open to society (hence "social")

Yep, and that didn't work out so well. Hence, the bans.


An organization choosing not to publish someone is not censorship. People choosing not to listen is not censorship.

A government choosing what information you have access to IS censorship.

There are many organizations, anyone can start one. There is only one government and you can't escape it.


That wouldn't be a problem if these organizations hadn't captured 95% of the discourse. There is nothing in the definition of censorship that requires it to be done by the government.

The same sort of power brokers that would drive censorship in a place like China are the ones who fund political campaigns, found think tanks, control media empires, and choose advertising spend, and they use this leverage to drive censorship on social media.

In the end, if the rich and powerful have effectively squelched dissent, does it matter whether it was done through government mandate or some more complex mechanism through private means?


While true by the dictionary definition, the commonly-understood colloquial definition of 'censorship' is government censorship.


I don't believe that to be true, as evidenced by this debate itself and the proliferation of this same debate across the internet.


'a problem' and 'censorship' are two different things.


The censorship by private organizations would not be a problem if...


The litmus test I think we should use is "We should honor the intent of the user".

If a group of users explicitly want access to white nationalist content, they should be able to get it. So I would oppose blogs, webhosts, and Cloudflare deplatforming anyone for any reason besides the outright illegal.

Facebook and Twitter are not just about serving content to those who have the intent to view it - in fact, the whole point of these social networks is that they expose content to NEW people who didn't initially have any intent to view. This is promotion, not access, and I have no problem with private entities choosing what they want to promote.

I would apply this same test to payments. Users who have explicit intent to financially contribute to objectionable content creators such as Alex Jones should still have a way of doing so. When Patreon, Mastercard, etc. deplatform him, it closes the door to those who already have intent. Of course, I'm all for FB and Twitter shutting down the campaign so the word wouldn't spread nearly as far.

As a moderate liberal who finds sexual content over-censored, yet is disgusted by right-wing and anti-vax (anti-vax is often leftist!) conspiracy theorists, I think this "honor their intent" test is a great way to keep the internet relatively sex-positive, and extremist content relatively niche.


This seems like a relatively moderate view, and I like how it breaks down the individual freedom of intentioned users vs. the freedom of users who have no intention of seeing said content.

But at the end of the day, aren't the companies who are providing payment processing or website hosting profiting off of extremism and hate? If you'll recall, the reason why they started deplatforming individuals to begin with was that large swaths of people boycotted their services until they chose to no longer do business with said extremists.

Isn't that voting with our dollars? Isn't that the Free Market of Ideas in action?


Oh, was that a thing? For example was there a lot of outrage and pressure on payment processors to deplatform FetLife, or for Patreon to remove cam girls? I don't recall anything along those lines.


Luckily, "civilized societies" are some of the most deluded about this point.

From McCarthyism, to J. Edgar Hoover, to MLK, to Gary Webb, to WMD, to the Patriot Act, to Snowden, to the "collusion" BS, to today's de-platforming, the establishment and the media easily stomp on whoever they don't like with impunity.


So, can we as humans learn from history? Can we establish better, more thoughtful, and more balanced societies as time and our understanding progresses?

Or have we already built the pinnacle of society at some past point, and everything we ever do in the future is doomed to be as bad or worse than what we already have?

I'd like to choose optimism here, personally.


Well, starting with speech platforms free for everybody, and letting people make up their minds, would be a good start.

Banning ads would also be another good start, but I don't see the idea getting very popular.


No, it would not be a good start. How do we know this? Because that's what the actual start was. And it led to Facebook becoming the carrier of all the hate people could convince each other to accept. And people targeted each other, conditioned each other, to normalize this hate and acceptance of violence against 'others'. And so we have the situation we have today, where Facebook was forced to acknowledge that they became a platform for hate.


How about instead of a one-stop-shop social network, we go back to the random topic-specific forums of yesteryear? It decentralizes discussion, allows individuals to freely associate among themselves, and doesn't result in a "one size fits all" mentality when it comes to moderation.


Indeed, this is a decentralised problem in need of a decentralised solution. Also relevant are https://hypothes.is/ and IPFS.


>They wouldn't do that, because there would be justified public outcry. Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

Why white? The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.


Which banned party (on Facebook) are you talking about? You said, '... yet I see no ban on the only party that...'. Which banned party are you comparing it to?


The CCP obviously (Chinese Communist Party, aka CPC), which he already mentioned.

Not sure if CCP has a FB page, but that far is obvious.


Is it though? I can't find anything that explicitly says that Facebook has banned the CCP. Could you provide any sources which say this? Also, the post I was replying to said:

> The CCP is running concentration camps for hundreds of thousands of Muslims right now, yet I see no ban on the only party that can be called National Socialist today, just because the nation they are supporting is yellow.

I don't see any way to infer this as saying that Facebook has banned the CCP–what am I missing here?

Also, if Facebook has indeed banned the CCP, well, turnabout is just fair play–after all, the CCP has banned Facebook from China.


This discussion is about FB and, more broadly, speech on private platforms. Not the Chinese Communist Party or any governments.


Ignoring the obvious attempt to frame fascism as somehow not right-wing, which is all too common on the right today, FB/Twitter etc. make exceptions for governments/public figures, including Donald Trump and the DoD.

I don't think they should, but I also know that if they didn't, Trump and others would get banned and the cries from the fake free speech worriers, (who are quiet as heck on ie BDS), would be a lot louder.


>Luckily, as moral agents, we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

Really? How about plain nationalists? How about communists? How about separatists? How about traditionalists? How about "Occupy Wall Street"? How about "nationalists" in e.g. Iceland, a country where nationalists would be predominantly if not exclusively white in the first place?

If your idea of "general political party" is Democrats and Republicans and the occasional third candidate, i.e. the bland two-party consensus that agrees on almost everything (foreign policy, more money to big money, etc.) but disagrees on token issues (and even that less and less), then sure.

But the movements and parties that change things up historically were never welcomed as "general political parties" by the establishment and the "good people" of the 10%.

I'm from a generally center-left-leaning country, where e.g. Reagan would be considered the epitome of nationalist and/or imperialist. Would it be OK for Facebook to censor Reagan (or some modern politician with the same ideas) on those grounds?


> Really? How about plain nationalists? How about communists? How about separatists? How about traditionalists? How about "Occupy Wall Street"? How about "nationalists" in e.g Iceland, a country where nationalists would be predominantly if not exclusively white in the first place?

Are they talking about racial genocide? No? Then they're a-okay.

I don't know why this is such a difficult concept for people to understand: If you're advocating genocide or violence against a people, you're gonna get banned. Pure and simple.


Yes, they wouldn’t do that. But no, checks and balances cannot be based on something that would or would not happen.


> we humans are capable of differentiating "general political party" from "white nationalists", and can target the latter and not the former.

We’re also good at being boiled slowly, like frogs.


Which is debunked.


Regarding frogs, maybe. Regarding people, it has never been debunked, and it has been proven time and again to be exactly right.


This is correct. Frogs just aren't that dumb; it's been proven. Humans, on the other hand...


You say that as if we didn't just have two years of progressives and media smearing people they disagree with as alt-right.

Doesn't matter what they believe or say: Sam Harris? Alt-right. Tim Pool? Alt-right. Jordan Peterson? Alt-right. Dave Rubin? Alt-right. And that's just the leftists!


Are you sure? Do we even know, for sure, what "white nationalist" means? I'm white, and I favor a government that's principally oriented toward advancing this nation's interests. Am I a white nationalist?


You're right that some of these terms have ambiguity, but your example is silly - it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

But even setting aside your specific example, I don't believe that ambiguity should paralyze us into inaction. I think it's fair to say "OK, we'll ban anyone who advocates distributing political power based on race, with the white 'race' getting the most power... when people cross that line won't always be clear, but we'll do our best." For private action especially, we should not let the perfect be the enemy of the good.


> You're right that some of these terms have ambiguity, but your example is silly - it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Stephen Colbert would disagree: https://www.youtube.com/watch?v=4nk0dUjYUNI

"You know why you're not supposed to use that word [nationalist]? Because it's the second half of 'white nationalist'. Chopping off the first word doesn't change what it means in our minds."


Stephen Colbert is also a comedian and not a linguist.


Good thing Facebook moderators are linguists. It was high time they got a real job.


> it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Is it?

Or is this a deliberately conflated term promoted as an 'official' label in public discourse in order to dissuade association with those favoring the more benign meaning?

Agreed that ambiguity shouldn't create inaction - but it just as well shouldn't promote incorrect action either


> Or is this a deliberately conflated term promoted as an 'official' label in public discourse in order to dissuade association with those favoring the more benign meaning?

Why would you use the term white and nationalist together? Being white has very little to do with being a nationalist unless you believe it has everything to do with it, in which case you would be racist.


Maybe you wouldn't, but if you happen to be white and hold an (inclusive, not race-based) nationalist ideology, you can now conveniently be smeared simply by a description of those two facts.

Oh, her? Don't listen to her, she's a 'white nationalist'.

Also, assuming this confusion to be real, with the term 'white nationalist' existing in the discourse as a negative, those who are not aware of the nuance between 'whites who happen to be nationalists' and 'those promoting a white nation' are pre-biased by faulty discourse to discount the words of the former.

Any popular terminology which deliberately overlooks subtlety and dismisses it when it is pointed out in the discourse is problematic. It's effectively a subtle smear campaign against the non-problematic nuances.

See also the strangely similar situation with the term 'skinhead' -

Initially this was a mostly apolitical working-class subculture; most smoked pot and listened to soul and reggae, and many were apolitical or left/socialist-leaning. Generally mildly populist, mostly white, but yes, somewhat 'dangerous' in that it was a popular social movement of unconventional, rowdy people of all stripes (much like the 'disenfranchised Trumpians' that the media is happy to highlight as contributing to the rise of the so-called 'white nationalism' we're talking about here).

Cue one politically motivated, overtly racist subgroup acting up and stealing all the headlines, and now the entire term/culture is essentially taboo.

One can argue that this group just got the press and 'messed up the term', but at some point editorial bias is a factor.

For God's sake, this is 'Hacker News'; I shouldn't need to explain this.


> it's well-accepted that the term "white nationalist" stands on its own and means something different from "white and a nationalist."

Many right-wing people claim that in practice there's no such distinction: they're accused of being "white nationalists" for being "white and a nationalist".

Edit:

If people downvoting me think I'm wrong, explain why people like Jordan Peterson, who merely espouse non-leftist positions and happen to be white, routinely get accused of supporting the alt-right and neo-Nazis.

For the people who doubt what I'm saying -- here's a video of Jordan Peterson. This is who the leftists routinely call "neo-Nazis" or "alt-right": moderates who refute their positions and calmly assert values like personal responsibility over collectivist victimhood culture.

https://www.youtube.com/watch?v=o2bFzK2EdIo

Edit 2:

I'd originally written "conservative"; I'm not sure Mr Peterson would describe himself in such terms. I've made it more neutral.


I concur that terms like "Nazi", "white nationalist", "alt-right" and similar have essentially morphed into "person I don't like".

And this applies to both ends of the political spectrum: https://www.youtube.com/watch?v=nYlZiWK2Iy8


> have essentially morphed into "person I don't like"

I’d say it’s even closer to “people I disagree with”.


For many people the latter implies the former.


"White nationalism" is an overloaded term, and until the GP's comments tonight I'd never heard its official definition.

I'd just assumed it was a vague insult aimed at non-nominal conservatives.


Not sure how you reached the conclusion that white nationalism doesn't have a real meaning. Googling "white nationalism" gives plenty of results that indicate "white nationalism" is an ideology that promotes white supremacy and/or a whites-only nation/racial segregation.

To quote Merriam Webster:

"one of a group of militant whites who espouse white supremacy and advocate enforced racial segregation"


To be clear, I stated that I'd never heard the official definition and used context clues to come up with a running (incorrect) definition.

I am not of the opinion that white nationalism has no real meaning.


Words have meaning, and people know what those meanings are. "White nationalist" does not mean "white people who like their country" and it's disingenuous to pretend it does.


It's much more disingenuous to pretend that words and phrases have universal meanings.


Universal isn't necessary. Well-understood in the American political context is sufficient.


There is an infinite number of definitions for “like their country”.

Nazis in Greece like their country or what they perceive as “their country”.


Wrong. It's a deliberate attempt at language engineering, pushing the idea that loving your country is bad. This is not new; the power structures that benefit from centralization (correctly) see national sovereignty as an obstacle.

Note how you will never hear the term "ethnic supremacists" used by conventional media. It's too accurate and does not push the borderless agenda.


I don’t think anyone’s mentioned policing patriotism. You can be as patriotic as you want. America’s great; I love our nation, its people, culture, and ideals.

That’s different from saying things like “get that Spanish off the menu, this is America”, “go home foreigners”, and “immigrants are criminals”.

I’m sure you can see the difference.


There will always be some junk quotes people can find (or make up!) to support their agenda.

I don't really care what FB does, I prefer it to have all the rope it needs, but this normalization of taking and modifying the meaning of terms to fit the anti-borders narrative is dishonest and manipulative. Again, it's not even remotely a new thing, the anti-borders crowd has been gunning against nationalism time eternal.


"Loving your country" is not "white nationalism." You're the one trying to redefine terms if you believe it is.


It's an inherently dishonest frame. I can't imagine running around talking about "(insert color) nationalists" when actually referring to people promoting segregation of citizens.


“White nationalism” has been synonymous with segregation and genocide essentially since the inception of the term. I guess if you care really hard about that particular phrase this is a tragedy. But there are ample other, less fraught ways to express that you’re an American patriot. Bemoaning that you can’t say “white nationalist” to mean that seems a strange (or disingenuous) stance to take.


I called the phrase inherently dishonest... and somehow that indicates I wanted to use it for some other context? Like I had positive use for it?

Your comment suggests you have internalized the idea that nationalist and racist are the same thing. Or were you really thinking I wanted to use "white nationalist" (or any color) in some other context?


Isn't the phrase intentionally self selected by these groups? Then what makes it dishonest to use it to refer to their beliefs?


So totally ignore what "inherently dishonest frame" means, and attempt to change the subject. Fine.

Let's take your question as true for the sake of discussion. Do you take your language cues from these people? I don't. I don't see why you would let people you strongly disagree with decide what language to use. Are you concerned you might offend them by not using their preferred terms?

Might you be opening yourself up to some rather trivial social engineering opportunities?

Anyway... def don't talk about frames.


Are you implying outsiders won't find it completely obvious what they really are because of the name? Because it is obvious.


Why take language cues from people so eager to describe things in terms of skin color?


“White nationalism” is an inherently racist concept. Not all nationalism is though. You seem confused that adding the word “white” to nationalism makes the whole phrase mean something else entirely. Maybe consider the context of who uses that phrase now and how it’s been used historically to understand why that particular construction is broadly (and correctly) considered racist and genocidal. Or why other uses of nationalism with other nationalities or colors aren’t.


First you assumed I was "Bemoaning that you can’t say “white nationalist”" and now you are assuming I am confused and don't know what the colloquial use of the term is.

What do you think "it's an inherently dishonest frame" means?

Frames are important, it's why I asked why you thought I was bemoaning the loss of a phrase when I was really describing how the phrase itself is dishonest.


[flagged]


That depends. When you say 'Keep Poland Polish', what exactly do you mean?


Yes. Because history demonstrates our ability to do that - not to allow factions like the KKK to triumph.

We’ve done this kind of thing before, and we’ll do it again. Facebook responding like this IS the marketplace of ideas reacting.


I've noticed this tendency among a lot of right wingers. Lately I've seen it a lot in metalhead circles, particularly black metal, which does have a bit of a nazi problem in some parts.

They'll crow and gloat about how the scene isn't a safe space, that it's founded in hate and intolerance. But as soon as well-known nazi/NSBM-related bands or members of the scene are deplatformed and antagonized for their views, the tone goes straight to "get these leftists out, this is a right wing scene! This is censorship!" and so on.

Apparently only right wing radicals should be allowed to have safe spaces... /s


We know, for sure, the dictionary definition. Consider looking it up & assessing whether you wish to apply the label to yourself.


>Consider looking it up & assessing whether you wish to apply the label to yourself.

But this isn't what will happen. Expressing skepticism about immigration - a fairly normal and mainstream opinion in the 1990s and early 2000s - can easily be construed today as "white nationalism" by many on the political left. It's not you who gets to define what your views are on these platforms, it's the "moderators".


Openly questioning the validity of the capitalist system has been construed as hardcore hunger and famine bread lines planned economy USSR-style communism for decades.

Welcome to not having a safe space anymore.


Openly questioning the validity of the capitalist system is fine. Facebook shouldn't be banning people who don't like capitalism from their platform either.


You are complaining about your ideology being ridiculed because it's bad and leads to awful results; your interlocutor is complaining about not being able to voice his opinion without actual persecution.


Will neo-Nazis be able to sidestep appropriate regulation by framing hate statements as questions?


In Sweden perhaps not, since 'white' and 'nation' largely coincide there, but in the US you are going to have a problem.


Yeah it means neo-Nazis who believe in the innate supremacy of the white race. If you’re one of them I want you to be deplatformed immediately and without sentimental hand-wringing about liberal platitudes.

This will be an ongoing project and will require constant tweaking and adjustment, but I’m all for their removal from the public sphere.

There are underlying economic and social issues leading to the reemergence of this worldview that need to be dealt with urgently and with peacemaking intention. Still, in the meantime this kind of thought if unchecked leads to genocide and must be stopped.



You think so? Republican stuff is already regularly banned from various platforms, mainly because tech companies lean left.

I don't consider myself Republican, but the slippery slope seems all too real in this case.


I'm curious, which platforms ban Republicans?


OP said "Republican stuff", which if he meant that various conservative/Republican leaning stuff has been censored on Facebook, that's definitely true. Search around; most cases typically range from "persecution complex/not actually censorship" to "automated processes have implicit bias" to "neutral on the surface but always seem to hurt conservative causes". With FB you tend to see automated things catch conservative stuff, then get overturned on appeal rather than permabanning people.

Facebook banning outside ads for the Ireland abortion referendum comes to mind, as well as the case where a Christian satire site was threatened with a ban for spreading false information because Snopes had fact-checked one of their articles as false. Susan B. Anthony List had a bunch of pro-life ads banned; I think most of them were restored.

https://stream.org/328684-2/ this is a more recent case that follows a familiar pattern: Flagged, rejected appeal, then mysteriously overturned appeal some days later.


I'm sure that tons of conservative/Republican leaning stuff has been censored on Facebook then overturned on appeal.

Hopefully, you also don't doubt that there are also many examples of Facebook censoring liberal/left/non-Republican whatever stuff, and then overturning on appeal, many times mysteriously so, after initially rejecting the appeal. (This post presents like a dozen examples in the genre of black activists being banned for discussion of racism, such as uploading screenshots of racist and sexist harassment they received. You might also want a warning that these examples are interspersed with the author's strongly opinionated and bellicose comments. https://medium.com/@thedididelgado/mark-zuckerberg-hates-bla... )

Due to their scale and the shittiness of their algorithms and processes at this kind of thing, surely you're not surprised there are tons of examples on every side. (This article documents more examples on all sides, including exemptions for a prominent Republican: https://www.propublica.org/article/facebook-hate-speech-cens... )

What reason do you have to believe these mistakes always seems to hurt conservative causes or that you tend to see conservative stuff get caught, disproportionately more than non-conservative stuff?


Can't know for sure, of course. If someone wants to start keeping score, I'd be interested in the results.

If you want to convince me that it's more of a "establishment/fringe" divide than "left/right", that ProPublica piece goes a long way towards making that case.


It's generally considered a "conservative" or "republican" position to contend that males and females differ biologically and that sex is an immutable characteristic. On twitter, if you express this idea towards someone who is trans or advocating trans-issues you will very likely get banned.


You know, those world renowned leftists that run Twitter and Reddit...


> Tim Pool is right, at this rate, sooner or later, the Feds will come knocking and will shut that party down.

What party? The government is gonna come into a private organization and tell them they are obligated to spend money to preserve, host, and broadcast hate speech that doesn't align with the company's values?

Please tell me how this differs from telling small business owners they can't refuse service.


The safe harbor protections from copyright violations for user uploads require neutrality in content hosting, because pre-approval implies you must also vet copyright.

They're welcome to censor to editorialize, but they lose their protections from copyright suits for relaying copyright material uploaded by their users.

For Facebook to be immune from copyright liability for my uploads, when they display them to others publicly for profit, they cannot express prior restraint over my upload. Such commercial copyright violations carry hefty penalties, in the thousands of dollars per view: Facebook and Google can't operate in an environment where they're liable to such a degree for uploads.

The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.) They're free to not accept that deal, but they're liable for their commercial copyright infringement in that case.


Are you a lawyer? If not, can you point to a legal expert explaining why a court would interpret the law as requiring neutrality, or as waiving the protection in the case of pre-approval or vetting copyright, or as requiring that the immunity is only provided in exchange for supporting free speech?

I read and re-read both the DMCA 17 USC § 512 and Section 230 of the CDA, and as far as I can tell, the DMCA only requires responding "expeditiously" to DMCA takedown notices and court orders, and the CDA has no conditions at all but doesn't protect from copyright liability in the first place.

In fact, the CDA explicitly states that its liability protection DOESN'T require neutrality, and extends to “any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected”. See https://www.eff.org/issues/bloggers/legal/liability/230


There's a question of whether a service which pre-filters (to editorialize) content from a user is actually qualified under 512.c at all -- since the content is no longer at the direction of the user, but at the editorial approval of the service.

An anthology is not exempted under 512.c.


I see. So am I understanding correctly that when you had asserted, in the comment I was replying to, that protections from copyright violations require neutrality in content hosting, you were not referring to settled case law, but rather an untested legal theory that you personally support but that has not been tested in the courts?


I am a lawyer, and while I know nothing about the law in question (not even close to my specialty), I can tell you that statutes are only half of the story. The other half would be court decisions involving those statutes. We live in a common law country which means that court decisions & precedents act as law themselves. They can't just overturn a statute or ignore it, but every court decision on a statute acts as a further refinement to the law in question.

So we'd have to pull relevant case history. You can't just look at the words in a statute. Each single term of art could have a chain of cases arguing over the specific meaning of that term.


> The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.)

No, it's not. You seem to be mixing an inverted understanding of CDA Section 230 (which was instituted to avoid discouraging host moderation of user content, because prior to 230 exerting any such editorial control risked a host being treated as a publisher rather than distributor, with greater liability exposure for the user content) with the DMCA safe harbor (which, unlike CDA 230, applies to copyright claims.)


> The safe harbor protections from copyright violations for user uploads require neutrality in content hosting, because pre-approval implies you must also vet copyright.

Is this an 'A therefore B' thing due to the way the laws are written, or is this explicitly called out someplace?

> The government recognized this about internet services, and granted them immunity to copyright related suits in exchange for supporting the American value of free speech on their platforms. (This is a law.)

I did a bit of looking (though not a lot) to try and find some sources on this but I wasn't able to really uncover anything that supports this.

Do you have any source available? I'm interested in reading into this idea further, I find it rather fascinating.


Look up Communications Decency Act, Section 230. This is a good primer: https://www.eff.org/issues/cda230


I read the entirety of the page you linked to. Nowhere I found does it say that neutrality is required for the protection to apply, nor that pre-approval or vetting copyright would waive the protection, nor that the protection is in exchange for supporting free speech. Could you point that out to me?

In fact, it links to another page which states:

        Wow, is there anything Section 230 can't do?
    Yes. It does not apply to [...] intellectual property law
https://www.eff.org/issues/bloggers/legal/liability/230

Which directly contradicts your implication that Section 230 is the law that grandparent was referring to when they stated:

    The government [...] granted [internet services] immunity
    to copyright related suits [...] (This is a law.)
Furthermore, Section 230 explicitly states that its liability protection DOESN'T require neutrality, and extends to:

    any action voluntarily taken in good faith to restrict
    access to or availability of material that the provider or
    user considers to be obscene, lewd, lascivious, filthy,
    excessively violent, harassing, or otherwise objectionable,
    whether or not such material is constitutionally protected
(same link)


> Look up Communications Decency Act, Section 230

Sure, but it (1) doesn't apply to copyright—that's the DMCA safe harbor not the CDA one, and (2) was specifically created to eliminate the added liability web hosts were then subject to if they engaged in content moderation, not to require them to abstain from moderation to secure the safe harbor.



No neutrality is required, only best-effort removal of illegal material.

https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepre...


The US government forces companies to spend money on many things they don’t want to do. Regulatory authority is extremely broad.


The government tells power and phone companies that they can't discriminate in cutting off service to people.

I'm pretty libertarian with respect to market regulation, but when it comes to monopolistic industries you need regulation.


> Do we wait until it can be shown that companies such as Facebook, Twitter, Google, etc. influenced elections because they refused to serve information from a candidate they didn't like? If we aren't already there, it's not long before it is.

That's an interesting question. What if Google and Facebook secretly demoted and hid articles about one political party and promoted articles about another? Theoretically they are absolutely free to do so, but do you think it would trigger a Congressional investigation? Those people have made noise about much smaller things.

Also, what if Google would start demoting its competitors and everyone who deals with those "untouchable" companies? That would be interesting to see.

I think people don't realise what power these new media have. The monopoly of Facebook or Google is a serious issue and we should not believe that they will always stay neutral.


Well, sure, they could do all sorts of bad things. Normally we don't go after people until they actually do those bad things though, instead of removing their ability to act because they might.


This already happens. "Mainstream Media" doesn't give equal coverage to 3rd party candidates.


Or even DNC candidates, if they're actual progressives: https://decisiondata.org/news/political-media-blackouts-pres...


[flagged]


Here's a recent quote from Alan Dershowitz that's illustrative:

> I received off-the-record information that an order had come from the very top: CNN executive Jeff Zucker didn’t want me on CNN any more. My centrist, nuanced perspective was anathema to CNN’s emerging brand as the anti-Trump network.

That should make it clear that a very small number of media executives ultimately decide what we see on the news.

https://thehill.com/opinion/white-house/436059-alan-dershowi...


Not to go way off topic here, but Alan Dershowitz is the type of person to say anything as long as there's benefit in it for him (see: extremely highly paid defense attorney for billionaire child rapist Jeffrey Epstein, and last year Harvey Weinstein), and while he may not be wrong, your comment sort of holds him up as having credibility on the inner workings of the media world. Nobody should take that man at his word.


As a lawyer, it's his job to defend people accused of horrible things. If we aren't going to celebrate the fact that accused people can get a good defense, we might as well just do away with trials altogether.


I agree that it's important to have lawyers defend people accused of horrible things because they could be innocent of those charges. The thing I think I disagree on is that when it is a single lawyer or a group of lawyers constantly defending a group of connected (through power and money) people, it tends to become a lot less clear that they're doing the moral duty of defending all accused instead of defending the accused that will line their pockets the most. The problem here is the one observed in the justice system as a whole, which is that routinely people with less money are disadvantaged in the system due to the lack of money to "convince" these lawyers that shield their actions behind the high-minded moral of defending all accused individuals.

In theory: all accused get representation regardless of accusation. In practice: only the rich who are accused of vile things get a proper defense.


I wasn’t arguing about the merits of having aggressive attorneys; I was just indicating that this is a person whose entire career is built around distorting the truth in his (or his clients') favor.


Why should we trust Alan Dershowitz without proof? Giuliani is a lawyer and he goes on TV all the time to lie about easily verifiable facts.


Perhaps the other way around. The media was Hillary's tool. According to the pied-piper strategy from the Podesta emails, she asked for increased coverage of Donald Trump because she thought she could beat him more easily than anyone else (if you don't like Trump, thank Hillary). She was funding the DNC, so she could deliver or withhold media access.

Small point, and while I agree overall, I'm not sure the media 'selected' Hillary. I think the Obama admin did, and after all, it was her turn.


Neutral might not exist.

> It seems to me that Facebook and Twitter are trying to have it both ways.

Of course Facebook and Twitter are trying to have it both ways, and, indeed, all ways — there are many more ways for people to communicate, or stances to take, than “both”.

Platforms like FB and Twitter hope to be the communications backbone of the world. The problem is, the world has opinions on what kinds of communications are acceptable. These platforms try to stay neutral, but the people are not.

Neutral does not exist. It's all relative.

In a polarized world, a neutral platform will die because neither side will like it. In a more interesting world, it might still die because people don't like people who don't think like them.


I like your idea that "neutral might not exist." FB is going about this in the obvious control-oriented strategy: we have a problem, ok, we'll make a rule against it. This doesn't work, and can only lead FB to having lots of rules and everyone unhappy with them.

The problem with FB is that they have built a system that rewards polarizing opinions. W. Edwards Deming said that your system is perfectly set up to give you the results you are getting, so if you want different ones, you need to change your system. Incentivize quality, disincentivize "viral-ness". Maybe limit viral-ness. Optimize for something besides addictiveness^Wengagement. Admit that people think, say, and do harmful things and build a system that is robust to it. Add some kind of negative feedback for posts.


Here's another "way" that's ignored: Nearly all services give users little to no control over the content they see. They can't self-moderate or filter the content coming their way. Instead, users have to "Appeal To Authority" (whether Facebook or the Feds) in order to make changes. It's incredibly disempowering in both cases, and doesn't need to be this way. "Mods" on Reddit help. Page Owners on FB pages help. However these are still "Authorities" that must be appealed to. Even resorting to contacting advertisers to pressure them to not sponsor "bad" content is still an Appeal To Authority.

Essentially, you have no control so the only solution is to not participate or appeal to a higher authority. Both are terrible.

In a weird tangential side-thought: The Internet is to Western Capitalism what Glasnost/Perestroika was to the Soviet Union. The opening of information, while allowing many great things through, also removed the filters that kept harmful content on the margins. In the Soviet state, it was the authority of the State that dictated content. In the Western world, it's mostly those who own/control large media platforms. Since liberalization, each situation found The Authority under acerbic attack from these new wellsprings of content, both legitimate and illegitimate.


"Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas."

Come over to Australia, that is what our traditional media has been like for years!


They're protected by the US First Amendment, unconditionally so, given that online websites by design work through publishing. You can't compel them to host unwanted speech.

As for liability, here's why the laws intentionally shield them:

https://www.lawfareblog.com/ted-cruz-vs-section-230-misrepre...

And finally, there's nothing forcing you to use them. Convenience (or lack of it) is not a sufficient argument for regulation.


>Imagine if there were only three newspapers in the entire country, soon the world, controlled by a small group of people who wish to use their publishing for their own agendas.

Could you explain how this is any different than the handful of television companies for the past 70 years?

As best as I can tell, the older demographics seem to be the ones primarily watching the news, and it shows in polling data. Their opinions and narratives are easily manipulated by the "news" networks they are faithful to. Companies run by a "small group of people."


>At what point do such things change into a public discourse problem?

When they censor opinions that the top 10% agrees with, since those are the ones that control the media and decide what's acceptable.


I don't see Germany tolerating any support of National Socialism in its public sphere, and rightly so... some ideologies are just too bloody abhorrent and against the public good.


For now. The passing of time and new generations will result in the loss of the lessons of the past.


Contrary to popular belief, we already had laws against hate speech in the middle of the 19th century. The authoritarians used them regularly to underline their claims of persecution. And it worked, because they were right to a degree.

You can only remember the lessons of the past if you understood them in the first place. And frankly, arguing for more content controls by authorities is an insult to anyone actually having an understanding of those times.


Slippery slope. At the moment there is no dialogue. Commenting on white supremacist content on Facebook and calling it out will currently get you banned. This statement purports that they will be more balanced from now on.


> influenced elections

In fairness to Google and Facebook, etc. TV networks have always been super biased and had far more reach than internet platforms - and it was actually conservatives who shut down (with good reason) the "fairness doctrine" that would have forced them to carry content they didn't necessarily agree with.


You listen to Tim Pool? He’s a right wing psycho that hides behind fake journalistic integrity to cherry pick stories for his target audience of white supremacists, xenophobes, and other anti-social nut jobs.


I, for one, listen to everybody I'm allowed to listen to.


We're already there.

Repeating a previous study, which had shown bias in the 2016 election, Dr. Robert Epstein shows that bias in the 2018 election pushed voters towards democrats. Approximately 4.6 million undecided voters are likely to have flipped. Numerous districts, particularly CA 45, are likely to have been flipped by Google's weaponized bias.

There have also been an awful lot of bans of beginner politicians, all on one side, often corrected after the election (once the damage is done) with a lame excuse about algorithms making mistakes.


Do you have citations for any of this?



These are both written by the same person, and amount to spreading FUD about liberal bogeymen and painting conservatives as victims.

For example, he quotes the Hillary Clinton instant-search debacle as an example of bias, which was debunked as technical illiteracy:

https://www.snopes.com/fact-check/google-manipulate-hillary-...


I'm guessing not.


I don't believe these slippery slope arguments are sound. You can apply this kind of argument to almost every alleged free speech issue. Just make it an argument against banning <insert whatever content you find absolutely reprehensible and clearly worth banning>.


In principle, who could be opposed?

In practice, how will this actually work out?

True story. I have a friend who got a temporary ban from Facebook (I think 90 days?) for commenting on a story about a Texas billionaire paying to hunt endangered animals, "How much would it cost to hunt Texas billionaires?" That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity. Will we see discussion of a future such measure banned by Facebook on such grounds? And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

Facebook is going to have some difficult conversations ahead. And the more of these lines that they draw, the more difficult boundary cases they will run into.


In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible. If Facebook feels they have to do something about it, then I'd suggest just flagging it. That gives people more information that they can use however they see fit. There is also the thought of collateral damage, like your friend ran into.

I also oppose it on general grounds in that I'm a big proponent of freedom of expression. Facebook can do what they want, we're not talking about the government here and I understand that, but I would rather they didn't go this route. I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent. I seem to be one of the few who feel that way anymore, though.

There is also the practical concern of what happens if the ideas of what is acceptable and what isn't changes over time? Who knows which opinions that you hold now might become anathema at a later date. For example, the opinion I'm expressing now used to be a lot more common than it is today.


> In principle, I am opposed. I don't think hiding unwanted dialogue is very helpful in the long run. It just makes it fester somewhere else that isn't as visible.

Let's prod at this:

- Should ISIS recruitment videos and propaganda be allowed (nay, encouraged) because deplatforming them will make them put their recruitment pamphlets elsewhere? Where else do they put their recruiting materials?

- Should we publicly and loudly encourage people to self harm, ideate suicide, etc. because if we don't, they'll just find secret places to do it?

- Should we consider white supremacist recruiting materials "dialogue" at all? If we're talking about dialogue, as in a formal debate between Richard Spencer and pretty much anyone else, one on one, that's potentially interesting (also embarrassing for Spencer). On the other hand, a video posted by a white supremacist isn't dialogue. It's even less dialogue when they can delete comments they can't aptly respond to, and when the people there are already interested (Richard Spencer videos weren't ever going to cross my feed). You're calling this dialogue, but it's really a one-sided dog and pony show with maybe some unlucky sacrifices. Perhaps we shouldn't ban dialogue between white supremacists and normal people, but you're not advocating for dialogue, you're advocating for Facebook (and whomever else) to support (distribute, platform, etc.) white supremacist propaganda and theater.

>I don't want to live in a society where what can and can't be said is strictly regulated either by force or by general consent.

How do you propose to create a society where people aren't allowed to dislike you? That's what you're asking for, essentially. "Freedom from consequences" is the common way of putting this, but really what you're asking for is an infringement on my freedom of association. If you piss enough people off, that'll come back to bite you. What's the alternative? That people can't hold you accountable for your previous words? That quickly devolves into a society of 4chan, which, well, I'm not sure why you'd want to live in that.

>For example, the opinion I'm expressing now used to be a lot more common than it is today.

How certain of this are you? Did black people or women have the freedoms you suggest 100-150 years ago in the US? Was anyone advocating for that?


> Should we consider white supremacist recruiting materials

How do we define "white supremacist recruiting materials"? A significant number of people have said to me, un-ironically, that opposition to immigration is an instance of white supremacist speech. Same with opposition to affirmative action (they even called a crowd of mostly Asians opposing affirmative action white supremacy in action). And I live in the Bay Area, the same place where Facebook is headquartered. That's the problem with trying to define ideological blacklists. Once you put a category onto the blacklist, everyone will try to push their political opponents into that category. This is how you get things like "learn to code" becoming a ban-worthy offense (but only when directed at journalists).

Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".


>Nowhere on the page linked in the original post does Facebook define "white nationalism" or "white separatism".

Do they define ISIS? That didn't seem to cause as much concern.

As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.


> Do they define ISIS?

ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

> As for the rest of your post, I'm baffled that you somehow think that "preventing people from trying to kill people" is somehow a negative thing that should be criticized.

"Preventing people from trying to kill people" was already against their terms of service. The whole point of this announcement is to announce the fact that Facebook is expanding their prohibited categories beyond "preventing people from trying to kill people".

> I also think that if I were consistently being mistaken for a white supremacist, I'd think about why that were happening, and probably try to distance myself from the things that were causing those mistakes. Your view appears to be that it's better to just prevent people from voicing their confusion.

As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist. Others have told me that supporting any expansions of immigration restrictions is white nationalist. A few have even told me that opposition to affirmative action (even among groups in which Asians are the ones primarily opposing it) is white nationalism. I did not get the sense that they were saying these things ironically or in jest. Are these views tantamount to "preventing people from trying to kill people"? I live in San Francisco, which while not exactly the same environment as Menlo Park, is still in the same metro area as Facebook's HQ. There's a significant possibility that folks with similarly liberal definitions of white nationalism exist at Facebook.

Also, the way you say you would "probably try to distance myself from the things that were causing those mistakes" really makes it sound like the chilling effect this has on discussion is a feature, not a bug. A significant number of people suspect that tech companies' expansion of prohibited speech is becoming a means of partisan manipulation. Statements such as yours likely reinforce this belief.

> Your view appears to be that it's better to just prevent people from voicing their confusion.

I am not trying to prevent anyone from voicing anything. The issue is that a significant number of people do confuse (or deliberately label) mainstream political views with "white nationalism" and "white separatism". Thus, Facebook's banning of these things is very likely to be seen as - and perhaps actually implemented as - a means of suppressing legitimate political discussion. It probably would have been better to keep their prohibited categories the same, and perhaps more aggressively police certain circles while keeping their policy - as you put it - "preventing people from trying to kill people".

There's already enough suspicion that Facebook is acting in a partisan manner, and more stuff like this is going to inspire ever greater calls to enforce stiffer regulation on tech companies and perhaps even breaking them up. This announcement seems like a shot in the foot for Facebook.


>ISIS is (or was) an explicit political group, an organized proto-state. It literally called itself a State. It was very clearly defined.

Are you suggesting that groups like the Daily Stormer, the National Policy Institute, etc. are not organized political groups? The National Policy Institute is quite literally a lobbying organization.

>As I stated earlier, multiple people have told me that desiring stronger border security and building the border wall is white nationalist.

Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

Something about everywhere you walk smelling like shit and all.


> Are you suggesting that groups like the daily stormer, the national policy institute, etc. are not organized political groups? The national policy institute is quite literally a lobbying organization.

Facebook's announcements do not specify these groups in particular, and do not seem to indicate that its definition of "white nationalism" and "white separatism" will be nearly as clearly defined as banning recruitment to the Caliphate.

> Like I said, if I were being confused with a white nationalist with any regularity, I would take steps to correct that perception, perhaps by trying to understand why people think such things. Through conversation, directly, with those people, or on my own with research. Not by trying to appeal to some higher ethics that the people who think I might be a white nationalist are wrong.

> Something about everywhere you walk smelling like shit and all.

Sorry to burst your bubble, but calling people white nationalists almost certainly isn't going to change their views. Quite the opposite: all it accomplishes is to lessen the severity of these terms and make people roll their eyes when they see the label thrown around. And it further alienates people like me, who do support immigration and affirmative action etc., but are increasingly turned off by the ever-diminishing threshold at which terms like these get thrown around. Furthermore, it makes it even harder to distinguish between actual white nationalists and legitimate views that people are trying to make socially unacceptable by painting them with the white nationalist brush.

Something about crying wolf and all.


[flagged]


> So your concern is literally just "I don't know how this will be enforced." If so, why not just...wait, and voice your concern when you have evidence of facebook abusing this, instead of doing what you're doing now, which looks to me a lot like what you disdainfully refer to as "crying wolf".

After that, the damage has been done and trust in Facebook will have been diminished. And there is a very big difference between "crying wolf" (as in, actually mislabeling something benign as something dangerous) and voicing concern based on previously observed behavior. Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

> Sure, but there are few enough white nationalists now that I don't particularly need to worry about them. The issue is if their mindshare grows. And calling them out keeps it that way.

You seem to be missing the core point I've been making. The issue is not with actual white nationalists which, as you point out, are few and far between. The issue is with mainstream views getting consistently labeled as white nationalist which introduces several issues. One, it makes distinguishing between the former and the latter more difficult making it very likely that mainstream views get banned under the label of white nationalist. And two, it makes it so that people are less concerned with claims of white nationalism thus making it more acceptable for the actual white nationalists to operate openly.

> See, I know a lot of people, and the only ones who ever say things like this are ones on the internet. I've never met a real actual human who is so concerned at the thought of someone calling someone else a name that they're going to stop supporting immigration reform or affirmative action.

You're right, but you're refuting a straw man. Alienation doesn't mean ceasing support for particular issues. More often than not it means refusing to identify with political groups. This isn't speculation, this is backed up by evidence. Record numbers of people don't identify with either Democrats or Republicans [1].

> Like I said, you seem really, really concerned about not being called a white nationalist. If you really do support things like affirmative action and reasonable immigration policy, I'm not sure why you have such a concern. Either no one is calling you a white nationalist, in which case, again, why do you care about the nonexistent strawperson who might do so? Or, the one person who is, is so fringe as to be easily ignored.

Yet again, you're talking about things I never wrote. I do not get called a white nationalist, and I don't think I ever have been. But I do see co-workers and former classmates call mainstream views white nationalist, and I do see how it makes political discussion toxic and non-productive. These aren't strawmen, these are people I work with every day who are adopting stances that make it impossible for them to engage with people holding opposing political views beyond hurling insults. Even though they aren't calling me a white nationalist, they're still calling my conservative family members and friends white nationalists when they say things like supporting the border wall makes someone a white nationalist. This isn't healthy for a democracy and it makes me not want to identify with the groups that they are a part of.

And to circle back to the original point that was made, the prevalence of mainstream getting called white nationalist makes it a real possibility that Facebook will start banning mainstream political views. If "build the wall" starts getting banned as white nationalist as many of my co-workers want it to be then Republicans are going to be very eager to bring down the regulatory hammer on Facebook and perhaps big tech companies in general.

[1]: https://news.gallup.com/poll/225056/americans-identification...


>Calling opposition to immigration white supremacy is the former, saying that there's a distinct probability that Facebook's enforcement of these rules will impact non-white-supremacist views is the latter.

So, you have historical evidence of facebook mislabeling things that are clearly not white supremacist as white supremacist?

If not, then yes, you're absolutely crying wolf, because you're ascribing the behavior of some entity to some unrelated entity, apparently based on geographic location (fun fact, the people enforcing FB's policies are probably in Austin or Phoenix[1], not MPK).

Also, your poll is outdated[2]. The numbers are back up. And looking at broader trends, more people identify as liberal than ever before[3], so all of this stuff that you think is causing this shift towards conservatism, or at least away from identifying as a democrat, isn't, since people are more willing to identify as Democratic and liberal now than in 2016 or 2012. I don't think these are related, but since apparently you do, I hope that this helps you understand that your reaction appears to be the minority reaction, and that this public shaming that you so despise is working.

[1]: https://www.theverge.com/2019/2/25/18229714/cognizant-facebo...

[2]: https://news.gallup.com/poll/15370/party-affiliation.aspx

[3]: https://news.gallup.com/poll/245813/leans-conservative-liber...


Allow me to summarize the story of the boy who cried wolf, because you seem to not have the correct understanding of the story:

In a village there exists a boy that is tasked with protecting a flock of sheep by shouting "wolf!" if he sees a wolf, to alert the townsfolk to come to his aid. Out of boredom (or self-satisfaction of his ability to get a reaction out of the townsfolk, depending on the variation) he shouts "wolf!" despite not seeing any wolf. After a couple instances of false alarm, the townsfolk no longer heed the boy's alarm and do not come to his aid when a wolf really does attack the flock.

The boy knew that there was no wolf attack, but claimed that a wolf was attacking the sheep anyway. I am doing no such thing.

Facebook has only just announced this policy, so no one has any observation of how they are enforcing it. This is obvious. The post itself states that the policy will only go into effect next week. That you are asking me for historical evidence regarding a policy that has yet to go into effect does not indicate that the original post was read in much detail (though it does make your previous statement that this announcement is about "preventing people from trying to kill people" a lot less surprising).

I am not, and have never, claimed that Facebook is using the guise of white supremacy to ban mainstream politics. Again, this policy isn't even in effect yet. I am, however, highlighting the fact that a significant segment of Facebook's work force (tech workers in the Bay Area) espouses a view of white supremacy that does categorize things like opposition to affirmative action and immigration as white supremacy. Will this affect how the rules get enforced? I don't know, but my take on this situation is that it's a significant risk.

Also, I'm not sure why you're claiming that my data is outdated. My data was from 2018, at which point independents were at 44%. The latest figure on your linked Gallup poll is 42% - not very far off. It's still significantly above the historical average of the mid 30s.


If you want an analogy, then a white nationalist terrorist organisation like, say, Atomwaffen or Combat 18, would be the equivalent to ISIS. There's no problem banning those - indeed, we already have laws for that.

On the other hand, banning white nationalism as a whole would be more like banning Salafi Islam as a whole, instead of specifically ISIS, al-Qaida, al-Shabab, Boko Haram etc.


>There's no problem banning those - indeed, we already have laws for that.

This is not correct. It is not illegal to be a member of, or recruit for, Atomwaffen. It's illegal to commit crimes. But associating with a group isn't a crime (for good reason!). That doesn't mean that we shouldn't take steps to keep people from associating with people who will cause them to commit crimes.

Or like, should we just go whole hog and encourage MS-13 to post recruitment videos on youtube too.

As for Salafism, no, that's akin to something like the WBC, which, while reasonably considered a hate group, hasn't been banned from anywhere as far as I know.


My understanding is that membership in terrorist or rebel groups is illegal (and in fact can be grounds for US forces to kill US citizens without trial, as with several US citizens killed while members of Al-Qaeda* ). But membership in groups that are... let's just say "unsavory" is not itself illegal. I don't know enough about groups like Atomwaffen to determine whether they belong to the former or the latter.

\* Which did stir up some controversy, but is not all that surprising. The precedent for this dates back to the US Civil War; it would be ridiculous to claim that the Union Army criminally murdered hundreds of thousands of US citizens on the battlefield without trial when the latter were fighting for the Confederacy.


What about this for an idea (and I'm not sure I love it, but it's just spitballing): We allow expression of any _ideas_, but we put a limit on _recruitment_. I.e., anybody gets to say "White people need a home country" or "the Caliphate shall reign supreme", but nobody gets to say "come to the meeting at 10th and Main 5 o'clock". That way we at least know what's on people's mind so we can engage them but we're putting a limit on helping them grow the movement (until and unless we decide they have some sort of valid point, which is part the point of letting them speak).


The recruitment rarely happens on the platform. The "normal" process is

1. Get exposed to $radical_group on social media

2. Go searching for $radical_group because their ideas are intriguing/edgy/whatever

3. Find the actual site of $radical_group, and now they control the entire process.

Think of it this way: you see a poster for some cool band that says "RadicalPandas's music will change your life. We have concerts, but they're secret and we can't tell you where they are because the man wants to keep us down". If anything, that sounds even more edgy and intriguing. So instead you just have to tear all of the posters down.


People with mental problems are always going to exist. Using them as props to keep chopping away at the boundaries of free speech is foolish.


The really radical stuff doesn't happen on open platforms, they're more of a tool for early exposure.

The actual planning and such happens on private sites with secret forums.

As a real-world example, a Danish right-wing nationalist network called ORG was exposed in 2011, after having allegedly been active since the mid-80s. Its members counted a number of well-to-do individuals, as well as members of the police and known violent white nationalists. The group ran and had control over several sports clubs, including martial arts and gun clubs, which they used for training members in street fighting and tactics.

They ran a database of basically every politician and public figure who ever espoused left-wing views, and thousands of ostensibly left-wing citizens as well, labeling them as "traitors" to be "dealt with".

At least one of their members was from the armed forces in some security-related capacity, and they had reasonably good opsec procedures in place.

I have no doubt that even after being exposed and having its members publicly named, the group or a new equivalent still exists, with new secret sites and a heightened level of paranoia towards possible infiltrators.

By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.


Do you have a way of going about this that isn't authoritarian? (Extending the definition of "authoritarian" to Facebook acting within its own platform). EDIT: Sorry this was really vague. By authoritarian in this case I mean that some authority has to decide what is and isn't an acceptable opinion (as opposed to, say, deciding on an impartial process that somehow weeds out bad opinions).

And somewhat related, does this mean you have as dismal a view of humanity as it seems, that you don't want people exposed to dangerous ideas? This all seems necessarily paternalistic. EDIT: I can't imagine, for instance, trusting in democracy with such a view.


I agree with KozmoNau7 here; so many notions of constructing some system that will somehow weed out the "bad" ideas seem misguided in my opinion. The reason is that these systems necessarily treat all ideas as equally valid inputs by requiring that anything be allowed. This is done despite bountiful evidence to the contrary, evidence that fascism and racism lead to horrendous consequences if left to spread through disingenuous tactics and deception.

In fact, we humans already form a decent system for weeding out bad ideas/opinions, we just don't listen to our own past experiences on the matter. We found out that fascism is putrid and yet now we try to come up with a new system that will weed it out instead of just chucking it into the garbage bin and moving on.

It's not paternalistic or a dismal view of humanity or that "humanity cannot be trusted with such a dangerous idea", it's because the people advocating for it constantly lie about it and intentionally trick/indoctrinate others into following. Under the right circumstances of me growing up, I fully believe someone could have deceived me into believing it, so I don't think I'm better than anybody who got sucked in. We can trust in democracy as long as we take proper precaution against things that prey on the freedom of expression and association in order to remove those rights from others.

Put another way, what real benefit is there to "freedom of speech, except for advocating fascism" as opposed to "freedom of speech, no exceptions"? The US already doesn't have pure unrestricted free speech because there are exceptions made for outlier situations where free speech is not protected in efforts to secure the safety of others. That hasn't led to total collapse or censorship.


I completely agree with what you wrote. I think too many people erroneously presume that free speech somehow guarantees an audience and acceptance, and when neither one happens then they feel their rights are being violated.

Personally, I feel there should be legal protections for all speech, but not protections from social ramifications. By this logic, I am totally fine with the idea of punching fascists. Which itself could be hit with assault charges, but then, how many juries would disagree with the reasoning for the violence? Let the masses decide for themselves, basically. It's what bothered me about the firing of James Gunn and Roseanne both despite their polar opposite politics. The companies that employed them cared so much about what the public thinks or feels that they couldn't be bothered to let the public decide for themselves to support either celeb or not. (And I know Gunn has since been rehired, but this happens so often, my point stands.)


In the case of ORG, it was exposed by a leftist research network that is primarily anarchist or non-authoritarian communist in nature. As far as I know this research network has no central authority, but rather a horizontal democratic structure.

I lean towards anarcho-communism myself, generally close to the original communist definition of libertarianism. So my answer to which authority should run things is "none". Facebook runs their own ship and are free to choose which content they will platform. While I would prefer a complete absence of hierarchy in all aspects of life, I think we can agree that this is probably not going to happen at FB.

I also strongly support anarchistic deplatforming and generally hindering fascists, nazis, racists and other bigots from spreading their noxious views. We have to realize at some point that some views simply aren't worth spreading.


> By deplatforming these people and driving them further underground and paranoid, we limit their ability to attract people through social media, for fear of exposure.

Do you have any evidence to support that statement? Would you expand on why you think so? I would estimate that by driving them more underground, you make them seem cooler in the eyes of angry teenagers. You also make them more radical, because underground their opinions are not exposed to contradicting views. In my opinion, we should do exactly the opposite: give them a platform to speak and oppose them. Isn't that what an open society is about?


The number of people being radicalized by ISIS decreased and their social media reach disappeared.

There are a few studies linked in this thread that reach similar conclusions.

The whole "sunlight is the best disinfectant" trope is only a trope, not a truth.

See also any conspiracy theory and the antivax movement. When groups appeal to fear, not logic, as a recruiting tool, you can't logic people away.


Who's "we"?


For exploratory purposes, just imagining we can make an agreement. Don't worry, I don't like the concept of collective decisions either.


Just to put this out there from the beginning, I don't know what the exact appropriate balance between censorship and freedom of expression is. Also, FB can do what they like and don't have to listen to me.

With regards to ISIS, if their recruitment videos are publicly available, then people can contest them publicly too. They can explain why this viewpoint or whatever might be appealing to certain demographics, but look at this evidence for what happens to those people once they join.

The self-harm / suicide question is harder for me. I still think it's a good idea to have this happen in the public sphere so people can expose those who encourage such things in other people. However, if an individual is being targeted specifically then maybe there should be a line there. I don't know.

As for white supremacy, yes, I said "dialogue" when what I really meant was "speech". I don't know who Richard Spencer is, but if he's blocking comments then people can still post rebuttals in other videos.

I guess my main point is that it's better to confront such things openly than to try to sweep them under the rug.

There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something. The danger, in my opinion, lies in trying to bury it. Then when people stumble upon them, or are recruited and given a login somewhere, they have access to only one side of the argument and are effectively in a bubble. In the public sphere where this plays out openly, they have ready access to conflicting information on the same platform.

I don't want a society where people aren't allowed to dislike me. I don't quite get where you're coming from. I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

I wasn't alive 100 to 150 years ago. However, growing up 40-ish years ago, I remember hearing such things as "I don't agree with what you say but I will defend to the death your right to say it". Not something you hear much, any more.


> There also seems to be this idea that ISIS and white supremacists and whoever can magically infect others with their dogma as a form of mind control or something.

At scale, this is exactly the case. If the probability of the average person becoming radicalized is greater than the probability of the average radical de-radicalizing, then open discourse will, on average, increase the number of radicals until those probabilities equalize.

So now I ask you: how successful have you seen discourse be at deradicalizing jihadis, white nationalists, Westboro Baptist Church members, anti-vax parents, or flat-earthers? Is it more, or less, successful than those groups are at radicalizing new people?

>I want a society where people can say what they think and not get censored for it for the most part. It requires that we all come to terms with the fact that some people hold different opinions that we really hate.

Someone says "I want to diddle your kid". You tell them to never speak to you again. They say it to someone else. That person blocks them too. The same person keeps telling everyone that they're interested in child molestation. Free association says that we should all be able to shun that person because yikes.

But you say something different. That we shouldn't shun the potential kid-diddler. We should continue embracing them and their views, because shunning them censors them. And all views should be cherished and protected.

Or maybe you aren't saying that, but then it's very tricky, because if everyone blocks them on Facebook, they've been deplatformed. So why is that okay if Facebook banning the person isn't? Or maybe it's that proclaiming support for child molestation isn't okay, but proclaiming support for genocide is. In which case we're right back where we are now, just that your definition of "for the most part" is "expression of child molestation is bad, but white supremacy is fine". This brings me to my final point:

>I don't agree with what you say but I will defend to the death your right to say it

This only works when what you're saying isn't a threat to me. See how willing people 30-40 years ago were to defend to the death the right of black power groups to call for black empowerment. (Hint: people got so scared that the Republicans banned guns.)

If I start saying "we should kill all free-speech absolutists", how long are you going to defend my ability to say that? Only as long as I don't have power. Once I have the ability to actually carry out my threats, say, if I'm the president, do you really want me saying I'm going to kill some subset of society? Are you going to defend my right to say that? What if you think I might actually carry out what I'm claiming?

As long as the speech isn't a threat, people are okay with defending it. This is why no one really cares about flat-earthers: they aren't killing anyone, they're just the easy butt of jokes. Same with, until very recently, anti-vaxxers. Jenny McCarthy is kooky and their stuff is nonsense, but what's the worry? Well, measles, the loss of herd immunity, etc. -> hmm, maybe we should stop this kind of thing.

Sometimes it's possible to make the fix reactively. Kicking unvaccinated children out of school stops much of the harm that unvaccinated children cause, and in many cases forces parents to vaccinate their kids despite their views.

Unfortunately for explicitly violent groups, there isn't an easy reactive solution. You can't un-shoot someone.

As soon as people are threatened, they stop defending the speech. "Expression of white supremacy is acceptable, but child molestation isn't" is just an expression of what you find threatening. What you're seeing is actually a shift toward more people speaking. Those who were voiceless before are speaking out and saying: hey, this was really shitty before, let's not go back, and all the people saying otherwise are actually, truly threatening to me, so perhaps let's have them not say these things.

And on the other side you're seeing people threatened by this change in how speech is considered, who are saying "hey, actually, let's legislate these platforms so that I can keep spewing my garbage". White supremacists feel threatened by censorship; black people don't. That should tell you all you need.


The problem is who defines what hate and a hate group are. It is usually the people you don't want doing so, people who want to tell you what to think. E.g., to some, Ben Shapiro, Jordan Peterson and Thomas Sowell are considered white supremacists.


That is a general problem, but it doesn't address the argument. We already find ISIS recruitment videos and suicide ideation groups bad, but where do we draw the line? If it's so difficult to define what should not be allowed in the open, then I don't know what can be disallowed, including ISIS recruitment videos, in the name of freedom of speech.


Talking about ISIS is a red herring. Are they considered white supremacists now?

If we are to judge by social media companies' past behavior [1], it's regular conservative and liberal opinions that will be considered hate speech or white supremacy.

[1] https://www.google.com/amp/s/www.foxbusiness.com/technology/...


The comment I originally responded to wasn't about white supremacy, but about censoring discourse. ISIS propaganda is as much "discourse" as white nationalist propaganda is, so why didn't you defend ISIS when it was deplatformed years ago? I realize that's kind of a gotcha, so I'll ask a similar question: given that it appears you feel that any deplatforming based on ideology is bad, do you criticize facebook for having deplatformed ISIS?

To pre-address your other comment, neither white nationalist propaganda nor ISIS propaganda necessarily includes calls to violence. So the incitement of violence standard applies to neither.

If not, what differentiates ISIS from white nationalists? Both are violent groups that wish to kill people different from them. As far as I can tell, one just looks a lot more like me. They're both dangerous, and we shouldn't let either recruit people on Facebook. What is the flaw in that line of reasoning?

As for that case, it was dismissed earlier this month[1], apparently because the plaintiffs didn't actually have any examples of wrongdoing on the part of companies, and just filed the suit to raise awareness of freedom watch's advocacy[2].

[1]: https://www.pacermonitor.com/public/case/25495855/FREEDOM_WA...

[2]: https://law.justia.com/cases/federal/district-courts/distric...


ISIS has a clearly defined religious teaching, and a core part of their message is using power as well as violence to push their religious teachings onto others to create a worldwide Islamic caliphate.

On the other hand, what is being called alt-right and white supremacist by mainstream organizations is bullshit:

- The Economist called Ben Shapiro alt-right today https://mobile.twitter.com/TheEconomist/status/1111248348114...

- The Guardian says Jordan Peterson is supposedly alt-right https://www.google.com/amp/s/amp.theguardian.com/science/201...

- Bret Weinstein is supposedly alt right

And ridiculously enough, Ben Shapiro is a Jew who is the biggest hate target of the alt-right, and Jordan Peterson is hated by alt-right figures such as Richard Spencer and Vox Day.


You're including the alt-right in an argument about white supremacists. Could you perhaps answer my questions without talking about the alt-right, who aren't being discussed by anyone but you?

White supremacists also advocate violence against other races, often to subjugate them. This is highly similar to ISIS.

Or are you claiming that all members of the alt right are white supremacists?


ADL defines alt right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

SPLC also say they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

The opinions of the SPLC and ADL are especially relevant because mainstream media and social media companies such as Facebook seem to have relationships with them, as well as with similar orgs, to define actions against people.

So no, we are talking about the same poorly defined term, because these orgs are an authority to Facebook.

The reason it's bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them and punish people for things they didn't do.


>This vague term actually encompasses a range of people on the extreme right who reject mainstream conservatism in favor of forms of conservatism that embrace implicit or explicit racism or white supremacy.

Unless you're of the opinion that "racism" and "white supremacy" are synonymous, the ADL does not define them as the same.

>So no, we are talking about the same poorly defined term.

To be clear, the "alt right" isn't white nationalist, it is, however, a white nationalist recruiting tool. Sort of like how you don't just start out as a random person and then the next day you become Jihadi Jane. You're slowly radicalized. The alt right to full-on white supremacist pipeline is the same way.

You start off interested in self help, so you read 12 rules for life; then get caught up in Peterson's weird ideas about western culture; then pretty soon you're seeing not just his youtube lectures, but other lectures about western culture; and then videos about the decline of white/western culture; and then you're watching Richard Spencer; then you shoot up a church.

Now absolutely, granted, not everyone follows the entire path, very few people end up radicalized, but while I'll absolutely agree with you that Peterson isn't a white supremacist, he's absolutely a useful idiot for them.

But again, let's stick to people who have been accused of being "White Supremacists" since you're the one using the term alt-right, not Facebook. Which innocent people are getting accidentally confused for "white supremacists" specifically?


From the same ADL source on white supremacy:

> White supremacist Richard Spencer, who runs the National Policy Institute, a tiny white supremacist think tank, coined the term “Alternative Right”

Or from the SPLC source:

> The racist so-called “alt-right,” which came to prominence in late 2015, is white nationalism’s most recent formulation

Both clearly use them in the same breath.

We can argue semantics about one being the tool for the other, etc. But that doesn't change the fact that they are being tightly related by organizations that are an authority to Facebook.

The SPLC and ADL are very aware of their power, and have linked many prominent dissenters of wildly different ideologies to what they claim is the alt-right: Maajid (an Arab Muslim; they lost a lawsuit on this one and apologized), Jordan Peterson (an individualist liberal), Ben Shapiro (a pretty normal conservative), Dave Rubin (a classical liberal), etc.

The SPLC and ADL are, by this evidence, too often bullshitters who don't relate what they say to truth, but instead to their ideology, which is not preoccupied with truth-seeking. It is really a shame their views are elevated like this as a justification for Facebook's and other organizations' abuse of power.


You still haven't given an example of someone being called a "white supremacist". Would you please? Or admit that you are fearmongering.

Again, Facebook didn't use the word "alt-right". They're talking about white supremacists. You're arguing that these are the same thing. So if that's the case, you should have ample examples of people calling Jordan Peterson a white supremacist also.

If not, then maybe they aren't the same thing, your equivocation is unwarranted, and you should stop attempting to confuse the subject.


With regards to examples, I can do better than showing how people use the terms in the same breath. Here [1, 2] are examples where the SPLC specifically uses alt-right and white supremacy to describe the viewpoints related to Jordan Peterson, Ben Shapiro, Dave Rubin, etc. in the same article.

I've also shown that even the authoritative sources Facebook trusts use alt-right and white supremacist in the same breath when defining the terms. These articles are no accident; this is what they believe.

This is not fearmongering. These are people who don't mind using their power to accuse viewpoint opponents of things they didn't do, and to claim, with no evidence, that they hold reprehensible viewpoints.

Edit: I can't reply due to the comment depth limit, but the debate here is about Facebook suppressing people like Jordan Peterson, Ben Shapiro and Dave Rubin using arguments made by the SPLC and ADL, such as the ones in [1, 2]. Why do they have the right to suppress other people's viewpoints on dubious grounds, with no recourse?

It is pretty clear from your arguments that you agree they are related, or as you say, "a white nationalist recruiting tool", regardless of how you otherwise view them. The process Facebook institutes will therefore suppress these viewpoints based on no evidence and on subjective mischaracterization, with no recourse for this abuse of power.

[1] https://www.splcenter.org/hatewatch/2018/06/07/prageru%E2%80...

[2] https://www.splcenter.org/hatewatch/2016/08/25/whose-alt-rig...


Right, so both of those articles said exactly what I said: there's a process of radicalization, and people like Rubin (and Peterson) feed into that process.

Once more: please give an example of someone calling Jordan Peterson a white supremacist, or alternatively admit that no one has done so and you were fearmongering.

They're not saying jbp is a white supremacist; they're saying he associates with white supremacists, or serves as a useful idiot for white supremacists. That's all well known. You claimed that people called Jordan Peterson a white supremacist. You still haven't justified that claim, and that's because, quite simply, it's false.

You're fearmongering. No one has called Jordan Peterson a white supremacist. That's just a factually incorrect statement. They've said he unintentionally helps white supremacists, occasionally associates with them, takes selfies with them, sure, but for all your trying, you still haven't been able to find someone actually calling jbp a white supremacist.

So perhaps, just maybe, your worry is misplaced.


I'd like to more clearly find out where, if anywhere, we disagree where it matters most. Do you think it is justified to ban or suppress content on social media from people that the SPLC/ADL view as part of the radicalization process? E.g., Jordan Peterson, Ben Shapiro and Dave Rubin.


>Do you think it is justified to ban or suppress content on social media from people that SPLC/ADL view as part of the radicalization process? Eg jordan Peterson, Ben Shapiro and Dave Rubin.

I certainly do, but evidence (e.g. historic enforcement patterns) indicates that Facebook does not.

Note that my opinion on this isn't global and without nuance. There are things that Jordan Peterson does and says that aren't, at all, controversial and are even occasionally interesting and thought provoking. Not everything he does radicalizes people. Same with, I assume, Rubin and Shapiro, although I don't pay them enough attention to know or care either way.

However, that's all beside the point. Facebook doesn't appear to think that they're worth censoring. I'm not sure why my opinion is at all relevant.

You still haven't given me that example, by the way.


Huh? How is it a red herring? Their propaganda is considered speech that is worth censoring. Maybe specifically naming ISIS makes this too easy; how about generally censoring radical Islamist propaganda that doesn't necessarily name ISIS specifically? The point you're not really replying to is that people are apparently willing to censor speech for various reasons, one of them being its link to a particular ideology. Is this okay or is it not?


Free speech doesn't cover incitement to violence; that is already illegal, and that is why it's a red herring. Advocating violence against innocent people is not primarily an ideology, but someone being a bad human being who should go to jail for breaking the law.

You really don't need hate speech and white supremacy policies to prohibit speech that is already illegal, but you do need them to suppress speech that some people might disagree with, when they want to abuse their power to suppress it.


Regarding the first paragraph, I know, but that isn't under discussion. What is under discussion is censorship of speech due to ideology, perhaps because it leads to violence even though there are no direct calls for violence. White supremacy and radical Islamism both need not call for violence, but the ultimate conclusion of each ideology is violence. It sounds like you don't want to censor either, which I think is consistent.


You are still assuming a clear definition of what white supremacy is, which Facebook doesn’t provide.

How do you think about the false positives?


ADL defines alt right and white supremacy to be the same: https://www.adl.org/resources/backgrounders/alt-right-a-prim...

SPLC also say they are the same: https://www.splcenter.org/fighting-hate/extremist-files/ideo...

These orgs provide the best definition we can get right now, because Facebook relies on others to define it instead of providing clear definitions, and these orgs are an authority to Facebook. If there was ever an attempt at using power while abdicating responsibility, that is it.

The reason why any action built upon this is bullshit is that these orgs don't relate it to truth and evidence; they use their power to slap this label onto anyone who disagrees with them and punish people for things they didn't do.


You were given a definition of white supremacy below and elsewhere so I won't repeat others. It just seems like under this sense of freedom of speech we shouldn't censor radical Islamism or white supremacy if we can't ban speech merely due to ideology leading to violence. That is a consistent and fair sense of morality I think.


That is a bit extreme compared to my worldview, and not necessary to be more balanced. I am arguing that Facebook's process is biased, and that Facebook's authoritative sources are unaccountable when it comes to fixing their own mistakes, and ideologically biased in a way that is not evidence-based. A better process, with an adversarial appeal involving authoritative viewpoint opponents, would have compensated for some of this.

To figure out if we disagree where it matters the most: do you think it is justified to ban or suppress content on social media from people that the SPLC/ADL view as part of the radicalization process? E.g., Jordan Peterson, Ben Shapiro and Dave Rubin.


Free speech in the US does actually cover incitement to violence; in Europe it doesn't.


Incitement to imminent lawless action is illegal https://en.m.wikipedia.org/wiki/Imminent_lawless_action


Well, ISIS is a criminal organization for reasons unrelated to their speech. Once they have become one, it's not unreasonable to make their recruitment videos illegal to distribute to other people, on the basis that doing so amounts to conspiracy to recruit people to commit crimes (but mere possession and watching them should still be legal, because in and of itself that doesn't help them).


What happened to Heather Heyer was also not speech. What happened to Clementa Pinckney and 8 other African-Americans at the Charleston church shooting was not speech. What happened at the Tree of Life Pittsburgh synagogue shooting was not speech.

Shouldn't the same argument apply to the videos that radicalized those murderers?


No. Why should it?


My parent comment had said that conspiracy to recruit people to commit crimes is not an unreasonable basis for making it illegal to distribute recruitment videos for ISIS, because they are a criminal organization due to their violence, not their speech. That implies that that should also not be an unreasonable basis for making it illegal to distribute recruitment videos for violent white nationalist organizations.

Does that answer your question?


Yes, thank you for explaining. I misunderstood the context of your question.


If you can prove that any particular video did so, sure, I don't see a problem with that.


Are you blind? White supremacists kill more people in the West than ISIS does.


You can improve this comment by removing the question and providing a source for your claim.


Rubbish. You're claiming that "white supremacists" kill more people than the Bataclan murders, the bombing of the Ariana Grande concert and a host of other atrocities (Sweden, etc.) carried out by self-proclaimed ISIS supporters.

That's not only wrong, it's highly offensive.


You are offended by an unsupported claim, and proceed to make an opposite unsupported claim. Now I'm offended not by your assertion, but by your lack of substance.

I googled for "extremism murder statistics": this covers only the US [1], and I have not looked at the methodology, but it supports GP's claim, not yours. Please discuss, but do so using substance, not master suppression techniques.

[1] https://www.adl.org/murder-and-extremism-2018


Well, I think the problem lies in that all his examples were from outside the US, and not only during 2018, which is when and where your example is from. Here's another source covering only the US that seems to agree with Wildgoose [1] (pdf), from 2010-2016. This source certainly doesn't count as many deaths (note your link said there were 313 deaths from right-wing extremists from 2009-2018); it only covers 2010-2016, saying there were 140 deaths (note your link also says 50 deaths were caused by right-wing extremists in 2018). I'm not sure what the discrepancy is in not accounting for all the deaths from your link. This link states that from 2010-2016, 68 deaths were from jihadist-inspired terrorists while 18 deaths were from white nationalists and extremists. Again, it would be nice to see data for the whole of the West.

Just to give you another source from the same people, which seems to agree with Wildgoose's parent, at least in its reasoning (albeit by making a few qualifiers on the dataset that bend it in favor of that reasoning) [2], covering 2001-2016 in the US.

[1] (pdf): https://www.start.umd.edu/pubs/START_IdeologicalMotivationsO...

[2]: https://www.start.umd.edu/pubs/START_ECDB_IslamistFarRightHo...


Wait, are you claiming they’re not? Lol


I do not have to claim that an untruth is such. It is rather those peddling untruths and statements not related to truth at all (bullshit) who have to confuse everyone else to a sufficient degree to twist their worldviews for their purpose.

Unfortunately, trying to control others' viewpoints hurts you as much as anyone else, because you are making yourself, and what you can learn from others, less adaptive to reality.


Is it trying to control others’ viewpoints to say that the KKK is a racist, white supremacist group? I thought we’re all in the marketplace of ideas here. You are literally telling me what to think, which is what your original comment purported to be against.


Show me one example of a white nationalist action by Jordan Peterson or Ben Shapiro, and I’ll be right there beside you fighting it. It’s on you to justify your claim.

The marketplace of ideas is not about throwing out unfounded claims and having others treat them like propositions founded in truth-seeking. Rather, a claim will be treated as the bullshit it most likely is unless you supply evidence for it.


> It just makes it fester somewhere else that isn't as visible.

That's a win in this case. It already festers somewhere else. White nationalists would like to spread their ideology outside of their normal bubble. They often talk about "redpilling normies." It's harder for them to do that if "normal" people have to explicitly seek out that kind of speech.


>It already festers somewhere else

Excluding people from a conversation doesn't help them learn why or how what they think is wrong. Those "hidden" places that let the wrongthink fester only create more actions driven by wrongthink (guess what they talk about? Wrongthink).

I don't understand the logic of removing someone's ability to converse if they don't understand why they're wrong; they can't ask questions.

someone explain to me how banning content/people = deradicalizing


White supremacists use Facebook for the purpose of recruiting and radicalizing people who are not yet white supremacists. The point of a ban like this is not to deradicalize people who are already white supremacists, it is to make it harder for people to become radicalized in the first place. People do not typically seek out white supremacists; white supremacists are always looking for people who are vulnerable to their message. Facebook seems to have come to the conclusion that their platform is just too useful to such terrorist groups and have decided to ban them before the problem gets worse.


>it is to make it harder for people to become radicalized in the first place

I've been in FB group chats with holocaust deniers etc. (one guy literally inboxed me a YouTube video; I never bothered clicking the links he sent, but it was funny).

"Banning white nationalist content" is a cute headline, but it doesn't address the problem of seemingly benign normal users who would never "reveal their power level" slowly radicalizing their friends with content not hosted on the site.


I said "harder" not "impossible." Yes, white supremacists are still going to use Facebook and are still going to try to radicalize people, but it will be harder.


"harder" not "impossible."

The Rhetoric Tricks, Traps, and Tactics of White Nationalism https://medium.com/@DeoTasDevil/the-rhetoric-tricks-traps-an...

The overt white nationalist content you think of doesn't really exist anymore; I guess maybe if you're a boomer it still lingers? I don't know.


I sure saw a lot of overtly antisemitic, racist, white supremacist imagery in that article...


Those memes are really old; there's more subtle content that radicalizes people.

It's less orchestrated, and just friendly up to a point.


>i dont understand the logic of removing someones ability to converse if they dont understand why they're wrong.. they cant ask questions.

I don't think I've ever seen someone already radicalized reason their way out of it through conversing on the internet. I'm guessing the calculus is that exposing this content to people makes it easy for them to be suckered in, but we don't see the opposite effect.


You should read this awesome New Yorker article from several years ago: https://www.newyorker.com/magazine/2015/11/23/conversion-via...

That said, I think I generally agree that conversion via social media is unlikely. Still, I’m also not sure about my position on banning people. But the above article is a great read.


The Phelps people are disgusting, but they are not terrorists. The thing that is missing from most conversations about white supremacists is that their hate goes beyond words. These are terrorist organizations whose members have committed one violent, murderous act after another, year after year. Banning white supremacists is no different from banning ISIS.


Makes sense. But Westboro is radical, and the above poster had been talking about radicalization, not terrorism.


Most people do not critically evaluate content, especially if their own ego or group identity is involved.


I was waiting for the "only idiots get suckered into wrongthink" comment


Then I'm an idiot too. There are a whole slew of cognitive biases that cause us to prefer our in-group.


welcome to the club!


Actually, their point isn't "only idiots" so much as "everyone." Cognitive biases are real and hard to overcome, and we all can succumb to them.


1 hour later

That's how quickly it can happen when you slowly expose people to content, and how useless it is to "ban white nationalist content".

Very cute headline, though.


Outbound content is dangerous; ban outbound content to make it "harder" but not "impossible" to expose "idiots" to "radical content": https://imgur.com/o14rjegrhk21.png


> It just makes it fester somewhere else that isn't as visible.

Is that true? Would we have the anti-vax movement if it weren't for Facebook & co.?


We had the anti-vax movement before Facebook.

And if it were banned, they'd just spin it as the pharma companies paying to suppress the truth. It would only further confirm their beliefs.


That "sunlight is the best disinfectant" is a very convenient narrative. I do realize in the end we all want to believe whatever is convenient for us (notably including anti-vaxxers). But is there actual scientific evidence that that particular narrative works?

Through all of mankind we had strong mechanisms to form consensus, including social repercussions. Those don't work anymore, since those who would have become outcasts in earlier generations can now easily (for example on Facebook) find like-minded communities and fulfill their social needs, get approval, etc. It'll be interesting to see how many different realities people can believe in before society breaks apart. I'd prefer we don't let that happen.

The question is what actually works. Censoring them might reinforce their beliefs, but I can accept giving up on some people if it effectively stops the spread.


> Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology. What kinds of things were suppressed the hardest? Sexual deviancy. Questioning authority. Questioning religion. Do you really want that kind of society? I think maybe we can stand some antivaxxers...


> > Through all of mankind we had strong mechanisms to form consensus, including social repercussions.
>
> This is a dangerous, dangerous path to go down if you belong to any kind of Enlightenment-inspired ideology.

Mechanisms to form consensus do not necessarily mean rule by mob; quite the opposite. Positive consensus mechanisms can be trust in the scientific method, institutional credibility, and acceptance of reason. These mechanisms can support an Enlightenment ideology, not prevent it.


I wasn't talking about mob rule. I was talking about an orthodoxy that's ruthlessly enforced by those in power. Because that's what it was before.

"Freedom of speech" is not an accident. Enlightenment thinkers have been pondering this for 200 years and more, and objections have been successfully addressed over and over again. It's as much of the type of consensus you describe as we will ever have. 'But computers' is not sufficient to just do away with it.


Pretty naive if you apply this conclusion to many countries in the world.


> The question is what actually works.

Make them pay a material cost: link vaccination to welfare benefits/family tax rebates, works well in Australia (search no jab, no pay). Also make an up to date vaccination record a requirement for enrolment into schools.

They will complain and might keep on spouting shit, but at the cost of ~$10k/child/year they'll change their actions pretty quick.


This is what the free market would do, but I thought we didn't want that because 'everyone should have affordable healthcare'. Isn't an antivaxxer part of 'everyone'? It's kind of spooky: progressives convince themselves they have no in-group bias by making their in-group unreasonably large, but the price for that seems to be that they dehumanize people who have a different view.


> Through all of mankind['s history,] we had strong mechanisms to form consensus, including social repercussions.

So, you're saying this mechanism is a good thing? Because mechanisms that reinforce the current social consensus, whatever it might be for that era, tend to maintain the status quo. And a desire to maintain the status quo is a big chunk of the philosophy of, yep, that's right... conservatives.

I've often said progressives aren't liberals because they don't agree with liberal philosophical values of the Enlightenment. Their "maintain the current societal consensus" argument is the strongest evidence of that yet.


>Through all of mankind we had strong mechanisms to form consensus, including social repercussions.

And throughout history people lived to be 30 and died of plague wallowing in their own excrement thinking the devil did it.

If your ideology needs thought control to work it needs to be taken out at the back of the sheds and shot. The second a society stops discussing ideas openly is the second it starts sliding towards a new dark age.


In fact, the anti-vaxx movement is more than a century old:

https://www.historyofvaccines.org/content/articles/history-a...


Maybe it wouldn't be as big, but I can assure you nothing gives a conspiracy theory more credence among its followers than being deplatformed.


This is what people said about Alex Jones and his network. When he was deplatformed there was a surge of downloads for his app from white supremacists trying to show support and people gave your exact argument that now he has been legitimized.

But what actually happened were those numbers were inflated by his followers and without the echo chamber of the larger base he had on YouTube his numbers eventually plummeted. Instead of being a regular in the news he occupies a small corner of the internet without access to the larger impressionable ever refreshing base he once had.

He has now resorted to trying to sneak videos back onto YouTube.


[flagged]


MLK was heavily persecuted in the sixties for his views and message by private and public entities. What point are you trying to make?


> MLK was heavily persecuted in the sixties for his views and message by private and public entities.

And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?


>And people should be okay with his persecution by the people who had the political upper hand in that era? No? Then why should anyone be okay with deplatforming Alex Jones (as bad as he is) by whoever has the political upper hand at the moment?

Because one was an indispensable paragon of civil rights and an avatar of anti-racism, and the other is a shrieking whackjob entertainer peddling freeze-dried survivalist food while spreading lies, mental illness, and grief to the families of murdered kids.

Hey, you're the one who brought up history.


And when the next generation's MLK shows up and gets persecuted again by the powers that be, except now there are even better deplatforming tools to silence him? Whoops, game over.

Hey, you're the one who failed to think things through. The tired old "b-but deplatforming will only ever be used against _bad_ people" argument is exactly a failure to appreciate the lessons of history, of which MLK is an excellent example.


>Hey, you're the one who failed to think things through.

No, I'm the one who corrected an utterly insane equivalence drawn between a commercial figure who drives grieving fathers of murdered kids to suicide and a man who was literally killed for demanding equal treatment for all people. Again: if you bring up history, you are now forced to deal with historical results and categories. Deal with them. Don't just pretend you are.


I don't see a valid counterargument in your response?

It's ironic that, on a site where the UNIX philosophy is widely appreciated, so many fail to appreciate the wisdom of the old quote "UNIX was not designed to stop you from doing stupid things, because that would also stop you from doing clever things." So too with Alex Jones, MLK, and freedom of speech.


I upvoted this comment because I saw people were downvoting it.


I wonder what would happen if they flagged it the same way my thermostat flags my monthly energy usage.

“Most of your peers consider this post racist neonazi propaganda.”


> Facebook can do what they want, we're not talking about the government here and I understand that

That isn't so clear: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=139661


Scott Alexander's theory is that having an open forum has become incredibly difficult, maybe tending towards impossible. All the incentives line up to make moderation steadily more costly or steadily more draconian over time, or both:

https://slatestarcodex.com/2019/02/22/rip-culture-war-thread...

It's not clear how to preserve a forum for free expression while spammers or trolls seek to suck up all the available bandwidth for their messages. You can end up with a sort of spectrum allocation problem, where if everyone is allowed to crowd the same frequency then communication on it becomes impossible. But once you start moderating content, it's really hard to find a satisfying balance. And on the margins, some trolls are indistinguishable from some people with really unbelievable opinions.

I don't know what the answer is, it seems like a really complicated problem.


From the release:

> Our own review of hate figures and organizations – as defined by our Dangerous Individuals & Organizations policy – further revealed the overlap between white nationalism and separatism and white supremacy.

It doesn't appear that this content is being banned because Facebook surrogates find the politics in question abhorrent (though they likely do so find). It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence. They collected evidence of this before acting.

The line is somewhat blurry, but it does exist. On one side, we have a sensible use of editorial discretion; on the other, we have censorship. Is Facebook going to cross the line at some point? Almost certainly. Basically every individual or entity to have editorial discretion has both made mistakes and abused that power.

It remains to be seen to what extent that will happen here. If anything, they've erred on the side of permissiveness so far (e.g. how long it took to ban ISIS). If that changes, we should call them out at that time.


> It is being banned because it is acting as part of a recruitment funnel for an ideology that openly advocates violence.

That would be believable, if it was applied as such across the board - e.g. also removing Christian Dominionists, Stalinists, Salafi Muslims, NoI etc.


Facebook already deplatforms radical Islamic preachers on a regular basis.

I doubt they have enough data to show the links for the other groups.


Are you sure about that? Try searching for "الموت للغرب" on facebook (Death to the west)



This is "terror content" though, not the ideology behind it. I tried looking around, and there are plenty of pages preaching the ideology. It sounds like they're okay so long as they don't directly call for violence, showcase violence (execution videos etc), or try to recruit for either of those, even though the end goals are inherently violent. Most white supremacist content is similar in that regard.


I'd note that "Death to the West" has a very mixed set of meanings.

Often it isn't meant as a threat, but instead is a political statement calling for the end of the hegemony of the West. However, sometimes it is meant as hate speech (often when associated with calls for the destruction of Israel).


I don't know about Salafi Muslims specifically, but Facebook already does censor radical Islamist groups like ISIS. This move is trying to be more across-the-board by including white supremacy under more of its aliases.

As for the others--when's the last time a Stalinist drove a car into a crowd or committed a mass shooting?


[flagged]


The argument being pushed here (whether they themselves understand it or not) is basically that "seeing something a little bit to the right > leads a little more to the right > then even more to the right > NAZI!"

But that's not how it works at all! Viewing racism as a left/right thing is wrong altogether. While it isn't obvious in US politics today, the left has often had a problem with race too.

Racism should be abhorrent to both left and right, and that should be independent of immigration views.

Which hilariously is a concept born almost exclusively out of modern leftist and current "_blank_ studies" thought/ideology.


Well actually Facebook has data showing this, so I don't think that's correct.


Because Facebook can misapply its policies does not mean it should have no policies, it merely means it should implement them better.

Obviously there are other kinds of content that Facebook could ban, but merely because those things exist doesn't mean this shouldn't also be banned.


"it merely means it should implement them better": this is the theory of every government committee, ever. In practice, generally, _it doesn't happen_.

I believe private companies are usually better, but still highly dependent on size and a number of other factors.

I definitely don't have faith that FB will wield this well, and the worst part is it will probably be pretty hard to find out from the silenced parties when they f!@# up until much later.


Why not just have their policy be the First Amendment, so they abide by the same rules as the country that created them? Why should corporations take on censorship powers that are prohibited for the government? Why should a private for-profit corporation controlled by one man have more power than a transparent institution controlled by democracy?


Governments have vastly more power over their citizens than any corporation. Facebook cannot imprison or kill users that violate their rules, and being banned from Facebook is not all that big of a deal (I have not had a Facebook account for more than a decade, and honestly I feel no particular need to go back). Moreover, if you required private for-profit corporations to adhere to the first amendment, you would quickly run into problems. Scientific journals could not exist -- the first amendment comes with no requirement for scientific rigor. Newspapers could not exist -- the first amendment has no requirement for journalistic quality (hence the legality of fake news).

The framers of the constitution could certainly have included a requirement that private institutions follow the same rules as the government. There were certainly big and powerful non-governmental organizations at the time -- the Catholic Church, for example.


I think the standard answer is that corporations are less restricted because they are not able to enact those policies through force. The government can fine you, imprison you, etc. regardless of what you think; however, you don't have to use Facebook in the first place.


That is the default answer, but there comes a point when you have to reconsider it in light of the facts of a particular situation.

Take email, for instance: first-class mail is protected by the Fourth Amendment, and a warrant is required to open and read it. Move communications to a private digital platform, and the presumption of privacy is completely flipped into a presumption that all your emails will be read and stored for future reading. When all communications go online, that leads to a very significant change in how the law applies to private communications, and that is exploited by the government just as much as it is exploited by the corporations. After many years of a free-for-all in which law enforcement and intelligence claimed they could view anything online without a warrant, there was finally some pushback by courts to reassert the rule of law, but email still has substantially less privacy than first-class mail.

Speech on public forums is a similar case. The First Amendment applies to the town square, but if a corporation creates a digital town square, then it gets the right of censorship. Then the government applies pressure on the corporation to censor on its behalf, and the government begins to exercise a power it did not previously have. That was not a problem when online platforms were small and inconsequential, but when they grow to billions of users globally, the power to censor speech on those platforms becomes quite influential on the exercise of real-world power.

Corporations do not have police powers but they are subjects of governments that do, so once they get power they can always be made tools of the people that hold power over them.


No, it means that while you can cheer this decision today, how do you know it won't be used against you in the future? What if the decision maker misreads your intentions or your sarcasm, or you don't believe your speech to be hateful but someone else does?

The whole point of defending speech you don't agree with is that it promises arbitrary rules won't be used when YOUR speech runs afoul of someone else's feelings.

Let me phrase it like this: if Mark Zuckerberg, who you may generally agree with, leaves Facebook and is replaced by, let's say, Donald Trump and his assistant Mike Pence, do you still think it should be Facebook's policy, as a service provider, to gauge what offensive speech is?

Maybe it's not Facebook's job to ban legal things you personally don't agree with.


"Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with."

The problem is not that I personally disagree with white supremacy. The problem is that white supremacist organizations are a significant and growing terrorist threat, and they are using Facebook to recruit new members. This is no different from the various platforms that banned ISIS.


Actually, while arguably it may be growing (kind of hard to tell with single-digit and zero yearly numbers), it isn't all that much of an issue overall, or even relatively speaking.

https://www.bbc.com/news/uk-47626859


Did you read your own link?

3 of the 4 graphs against time showed right-wing extremism increasing, in Western Europe and North America, the UK, and the USA, respectively. The only graph that didn't show that right-wing extremism is a growing problem instead showed that it has been consistently bad in Germany, worse than the other two categories combined (in that graph, the other two categories were left-wing attacks and "Not Identified").

What's arguable about it?


It rises through animosity mostly. It had been on the net for years without any significant effect. Rumor has it they tried to recruit some weeaboos, but they were too smart.

The poster is pretty much correct.


Did you mean to reply to someone else's comment? This doesn't appear to have anything to do with my comment.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

I don't.

My view on this action is a view on this action, not on every potential future action Facebook might take that is loosely analogous.

> if Mark Zuckerberg who you may generally agree with leaves Facebook and is replaced by let's say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider's policy to gauge what offensive speech is?

Yes.

I also think that they would do a horrible job of it, and the board of Facebook should not choose them for that job. I also think Donald Trump does a horrible job directing the policy of the executive branch of the US government, but I don't think that fact means the President of the United States should not direct executive policy.

If I thought that no one should have any responsibility or authority if Donald Trump would fail in the responsibility or misuse the authority, then, well, I wouldn't allow anyone to do anything.

> Maybe it’s not Facebook’s job to ban legal things you personally don’t agree with.

It's absolutely Facebook's job to decide what messages they are willing to relay on their platform. That's a direct consequence of the First Amendment.


The part where you insist you would still be OK with it if people you strongly disagree with were making the rules is intellectually dishonest.


> how do you know it won’t be used against you in the future

I think it's foolish to wring your hands over the arbitrary machinations of a multibillion dollar corporation's online platform. They don't care about you or me and frankly they have no obligation to do so. This logic is akin to complaining that McDonalds puts too much lard in the fries, maybe that's true, but if that's the case then don't give McDonalds your business; the fact that there is a McDonalds on every street corner doesn't mean they are obligated to modify their business process to be within the parameters of your approval.

> if Mark Zuckerburg who you may generally agree with leaves Facebook and is replaced by let’s say Donald Trump and his assistant Mike Pence; do you still think it should be Facebook as a service provider’s policy to gauge what offensive speech is?

That's Facebook's prerogative. I don't really use Facebook, but if I felt like Facebook was unfairly targeting me I'd stop using their platform, if they elected Trump to the board of directors I'd stop using their platform. The solution to all these Facebook problems is very simple: stop using it.


> No, it means that while you can cheer this decision today, how do you know it won’t be used against you in the future?

Because I'm not a white nationalist? Facebook already bans child pornography, pro-ISIS content, and doxxing. Does anyone seriously believe that Facebook is slippery sloping to banning all speech?

But let's say that Trump and Pence do indeed take over Facebook and make it so they ban all users that don't loudly praise Trump. If they do, honestly, so what? Facebook can't imprison you or legally remove your property. The worst they can do is ban you from their platform. Which, well, that sucks, but there's lots of sites on the Internet. In fact, you could go ahead and legally make a non-censorship Facebook.

...of course, sites like that already exist, in this world, not in the Trump/Pence Facebook world. And they're dominated by child pornography, white nationalism, and doxxing. No one at large uses them because they're horribly toxic and disgusting. Almost like some rules about content are actually helpful for sites on the Internet.


It seems you and I go to different websites; and I’m quite happy about that.

But I believe my comment said legal things. Having a different opinion is still legal, for now. You jumped to all sorts of illegal things.

And my context was: what if "the bad people" are suddenly running these services or in positions of power, and relative to them, you are the one with hate speech? Perhaps it shouldn't be up to them, or you, or anyone to police what someone else MIGHT find offensive.


Pro-ISIS speech isn't illegal but it's still banned. Pornography isn't illegal but it's still banned. So the legality argument here isn't particularly compelling.

> Perhaps it shouldn’t be up to them, or you, or anyone to police what someone else MIGHT find offensive.

You have got to be kidding. If it's on a platform I control I can make any kind of legal judgments about the content I find acceptable. And if you're banned, again, so what? Facebook is not a government or even a government-adjacent entity: I have no right to participate on the site, and they have no obligation to platform my speech.


Now I'm curious about what non-censorship sites you are using? Because the only ones that I am aware of are full of terrible people. If you have one that isn't over-run by the bad guys, please share it because others here might want to use it too.


Sarcasm and bullying aren't mutually exclusive.


Indeed. In fact, it's a rare bully who won't protest that they're "only kidding!" whenever they're called on their bullying. And in their mind, they probably were only kidding.


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That line is grey and I'd argue that your friend's comment, while obviously hyperbole, does not respect human life.

There was an article posted about cyclists that points out the dangers of dehumanizing others, and how comedy/hyperbole help groupthink achieve it.


So we are banning jokes now?


I think it's pretty reasonable to ban jokes about killing people. Maybe it was an overreaction here, but I certainly try to avoid even jokingly saying things that could be interpreted as a death threat if taken seriously, especially on the internet where it's easy for tone and context to get lost.


Just the unfunny ones


Of course we are. Where have you been the last 5 years?


> That was considered a violation of their anti-bullying stance. Knowing her, it wasn't. It was sarcasm.

That's what bullies usually say when you confront them.


A one off joke, or even mean remark, is not bullying.


Why should we tolerate any mean remarks at all?


How much would it cost to hunt [insert group of people here]?

Even if someone commented that as "sarcasm", it would look like bullying to me.


Nope. You misunderstand what bullying is. You can't bully an abstract group.


In principle, who could be opposed? In practice, how will this actually work out?

We've had Nazis and Klansmen effectively shut out of public discourse for decades. They aren't on TV. Their ads don't run in newspapers. If you go shopping in a pointy white hood and a swastika armband, the store (or the shoppers, more insistently) will likely ask you to leave.

Facebook (and similar) are the ones with the odd new 'principle' thing they tried which is 'it's ok for the hyper-overt bigots to shit everywhere with no consequences'. In practice, it has not worked out well so they are belatedly changing it.


Blegh, the worst part is coming in here and reading people criticizing and outright wanting to ban the use of sarcasm, or implying that they know your friend better than you do and that she definitely, literally meant what she said.

This is the end of freedom and, along with Europe's recent decisions, the beginning of all the worst dystopias we have already listened to/read/watched.


I don't like recent trends either, but you really need to read more.


In principle, Facebook has never branded itself a bastion of free speech.

In practice, we've seen a lot of examples across YouTube, Facebook, and Twitter where things are blanket deplatformed under the guise of policies like these, and it's concerning.

One thing is for certain - Facebook can't possibly do a worse job than YouTube has at policing content.


If only Facebook focused its products on encrypted, private conversations, they wouldn't have to worry about / could claim to be incapable of policing content...


I promise you, people would still want to police communications like the kind that are being discussed here.


Corporations and organizations have always had the ability to police their content to their own desires. A more interesting question is when that becomes ethically wrong. Are people against the big companies policing content _in general_ (and thus likely against the first statement), or are people against the big companies policing content that violates the popular definition of free speech once they reach a certain size?

I mean, we're discussing this topic on a site that highly polices content, albeit one that does so in different ways; such different ways that it attracts the very commenters in this thread to read and respond to content rather than visiting other sites, or any at all.


Until there are objective, rational ways to determine what content to ban and what not to ban, I'd prefer that we stay clear of using filters based on "I know it when I see it".


If we did that with pornography we'd still be swimming in porno ads.


> Knowing her, it wasn't. It was sarcasm.

Did she know that the person she directed that comment to did not know her?


You don't have to know anyone to read that comment as satire. I suppose Facebook would also have banned Jonathan Swift for his "A Modest Proposal". https://en.m.wikipedia.org/wiki/A_Modest_Proposal


But people do literally post on Facebook that they're going to kill someone and then follow up and do it. Remember Brenton Harrison Tarrant?


A tiny minority... who would've done it anyway even without Facebook.


Surely you see there's a difference between a Modest Proposal and posting a Facebook comment suggesting someone should die with no other context informing people it's a joke?


Doesn't the context of a billionaire endangered-animal hunter inform us that it may possibly be not serious?

Surely?

On the other hand, there's bound to be people who believe fair-trial-judicially-decided capital punishment would be fair retribution for anyone who intentionally kills endangered animals.

I'm going to say something like: I'm against capital punishment because it tends to kill innocent people at least occasionally, not because it isn't often deserved.


The point of a comparison is that there is a parallel, not a difference. There is always a difference.

What is the parallel? In both cases, people who did not recognize the author's point took offense. (And yes, a lot of people were seriously horrified by Swift.) And in both cases the author's actual point was close to the direct opposite of the one that those people thought. Some people found this obvious; others didn't. To people who found it obvious, it can be surprising that it wasn't obvious to others. People who didn't find it obvious think it a horrible thing to say.

In this case the comment is directed against the justification that the billionaire offered that it is OK for him to kill the animal because he paid lots of money to do so. And her point is that just because you pay lots of money to do a wrong thing, doesn't make it OK to do that wrong thing. To see it, put the billionaire in the animal's shoes. How much money would it take to make hunting OK? Obviously no amount of money would suffice! Just as the billionaire's having paid lots of money didn't make his hunting OK either.

That said, this one actually gets complicated. The money from these hunts goes to anti-poaching efforts. So the billionaire kills one animal, and his money saves others. Which still makes the billionaire a shitty person, but there is a utilitarian argument for allowing it.

That said, how would you feel if we were talking about hunting children dying in a famine instead of black rhinos on a preserve? The same utilitarian argument applies, but I think most would be for putting the billionaire in jail. How you feel about that is likely close to how that friend feels about what actually happened.


Ultimately I'd prefer that nobody was threatening to kill anyone in any context on a social media site, no matter how witty they think they're being. It doesn't seem like it should be such a high bar to set, but apparently it is.


The thing that makes it an obvious joke is that it's a simple reversal of a previous statement. That is a very common pattern for jokes.


Define "directed that comment to".

I am quite sure that Texas billionaire Lacy Harber does not know her.

I am also quite sure that the person who shared the post she replied to about his hunting an endangered black rhino did know her.

It would also be a safe bet that some friends of friends who saw that comment did not know her.

I would say that she "directed that comment to" the second. It is impossible to tell who reported her, or what they thought.


> It is impossible to tell who reported her, or what they thought.

Wouldn't Facebook know these things?

Under what circumstances do we have a right to confront our accusers?


The whole point of an anonymous complaint system is to allow accusers who might be intimidated by the idea of confronting you to have a way to get their concerns met.

Which means that anyone who has created an anonymous complaint system has traded off your right to confront an accuser with the accuser's right to not be intimidated and decided in favor of the accuser.

However there is usually another counterbalance, such as having the accusation silently disappear unless a neutral third party thinks that there is a point to the accusation.


> In principle, who could be opposed?

Ends vs. means, buddy. Ends vs means. Every single political topic on HN seems to be arguments between people who can't separate the two, and those who do.


Or take some Spanish politicians' comments re Gibraltar: basically right-wing nationalists who want to use Brexit as an excuse to annex Gibraltar.


Seriously tho, how much would it cost to hunt Texas billionaires? And world billionaires?


> Moving on to this policy, I'm happy to see neonazis and the KKK have a hard time. But there are people in the UK and EU who would see Brexit as being a separatist cause motivated by racial animosity.

No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

> And remaining banned even for people who support it on economic grounds because they think that having to follow EU policies on GDPR, copyright, and so on will be a net negative for the UK?

This is a non-issue and just scaremongering, not all that different from "they'll allow sex with children next!" from the anti-gay idiots (no one is confused between gay rights and "pedo rights").


>No one is confused about the difference between "I want to kill all Jews" and "I think the British economy would be better off with fewer Polish plumbers".

You clearly haven't been paying attention to modern-day discourse over issues like illegal immigration. If you listen to some people (unironically the same people pushing for stuff like this), having an issue with illegal immigration is basically treated as a confession of guilt. And that's even ignoring some of the newer arguments they're making: that merely holding an opinion that could be seen (solely as determined by them) as problematic or a "whistle" (in effect, anything to the right of left-of-center or non-PC) is a "gateway to extremism", and so "hate speech" needs to be banned next.


I have been paying a great deal of attention, and it turns out that there is a more than insignificant overlap between actual "kill all Jews" Nazis and the regular anti-immigration crowd.

Furthermore, as soon as the literal neo-Nazis get criticized, the anti-immigration crowd either outright jumps to their defence or shrugs any concerns off. Trump's "very fine people" remark is a good example of that, as is a recent article which "proved" anti-conservative bias on Twitter by pointing out that people like David Duke (former KKK grand wizard) or Richard Spencer (literal neo-Nazi who wants to forcibly eject all Jews and Blacks from the US) got banned from Twitter.[1]

So, if you don't want to be treated like a duck, then don't walk and quack like one. How else am I supposed to interpret an article defending literal neo-Nazis as "Conservatives treated harshly on Twitter"? Look at the data that is presented[2]: it literally includes the American Nazi Party. Now, if you want to make the argument that we should allow these people because "muh free speech", then okay, but read that article again; it just talks about "Conservatives" and "Trump supporters" (aside: there are other problems with this "study" as well, such as not including various Liberal accounts that were banned for unstated reasons).

I also agree that sometimes anti-immigration views are brushed aside as "racist" far too quickly, and it annoys me as well. But this kind of confusion is a bed of your own making.

I am not as pro-immigration as you might think based on the above. I think there are some real problems caused by both legal and illegal immigration, and that they should be addressed. But it's plenty evident that the anti-immigration right has long since been infested by some very nasty people, which is doing a great disservice to everyone else, especially those with anti-immigration views.

I suggest you first take some effort to purge the toxicity before complaining about "PC".

[1]: https://quillette.com/2019/02/12/it-isnt-your-imagination-tw... [2]: https://docs.wixstatic.com/ugd/88a74d_d231bdbfb13c4b9ab77422...


I don't even need to write up a full reply, because you went and proved my point perfectly.

Idiots like David Duke and Richard Spencer make weak strawmen here. And i'm not even going to play into that game.

The problem is that paying better attention to whom you... defend(?), works both ways. Certain groups and people are literally believed and "protected" by the mainstream media by default, when they are clearly coming from a pretty strong POV.

>your own making

No it's a silencing tactic, no more and no less.

When one side stops calling everybody else Nazis, then maybe we can all come together and have an adult talk about "toxicity". But that's not going to happen as long as some keep acting like little children on the playground, calling others names to shut down debate.


I'm not shutting down any debate, I am pointing out there are literal neo-Nazis being defended under the banner of Conservatism. Be angry at Quillette, not me. There are plenty more examples. You can side-step that all you want with vague accusations against "mainstream media".

And "one side calling everybody Nazis"? Really? Did you forget that Dinesh D'Souza wrote an entire book calling the left "Nazis"? That there were ACA protesters with Obama defaced as Hitler? Never mind shining examples of "adult talk" such as "Assume the Left Lies, and You Will Discover the Truth"[1].

[1]: https://www.nationalreview.com/2019/03/left-lies-trump-russi...


> In principle, who could be opposed?

I am opposed in principle.

White nationalism (or black nationalism, or yellow nationalism) should be able to speak on a social platform like Facebook.


“Joking” about murder or gun violence should be banned. Using satire to critique hunting when gun violence is a bigger issue is tone deaf.


That's really a perfect time to use satire.


If people weren’t shooting up schools and places of worship every other week... still no. It’s still callous.

