Hacker News
Substack faces user revolt over anti-censorship stance on neo-Nazis (theguardian.com)
32 points by markx2 4 months ago | 52 comments



For those who support this, could you name another instance throughout history where people restricting speech of others ended up being "in the right", no matter the circumstance?


Yes: the guidelines of Hacker News promote civility, good faith, and logical argument, which drastically reduces the amount of trolling, flaming, and general vitriol.

Without moderation, forums quickly become unusable. If you disagreed, you'd be on 4chan, not Hacker News.

So I think you recognize the utility of at least some level of content moderation, which is a specific type of speech restriction.


But I'm on HN and on 4chan, for different reasons. While I find no joy in the tone on 4chan, I love how more open minded a lot of people there are on certain topics (that aren't related to politics).


But by going to 4chan, you are providing ad revenue allowing them to host pretty racist and hateful content. You may be there for the altruistic content, but you can’t separate it from the rest. Is the openness of the other content on 4chan worth that?


I don't use the Internet without strict ad blocking, so that's a no for me.


Ok, so you find that HN provides something that 4chan cannot, most likely as a result of its moderation, which suggests that limiting speech is, at times, useful and of value.

Fair?


Laws against slander, libel, perjury, doxxing, threats of violence and false advertising? Child porn? Planning a crime?

Society is replete with restrictions on speech that are considered uncontroversial even amongst most die-hard libertarians. It's weird that standing up against neo-nazis always seems to be a bridge too far.

And this isn't even censorship. No one is restricting the speech of the nazis on Substack. Rather, people are choosing not to associate themselves with a platform that welcomes nazis. They're voting with their money and their feet. Free speech doesn't guarantee you a platform nor does it guarantee you an audience. If it's to be a market of ideas, then it's incumbent upon the proponents of ideas to negotiate with the market.

You might reply that these efforts are putting pressure on Substack and attempting to influence Substack to change their policies. And so what? That's a perfectly valid use of free speech, isn't it?


Slander and libel are considered controversial amongst die-hard libertarians. It's much harder to use those laws to suppress speech in the USA than in, say, the UK, which is one reason the USA could produce South Park when nowhere else could.

Do laws against "doxxing" even exist at all, or is that something you made up? Journalists "doxx" people they don't like all the time, apparently without consequence.

Laws against threats of violence are extremely controversial because the left constantly tries to redefine the term violence to encompass any views they don't like. In the USA the exact definition of "imminent" with respect to incitement to violence has consumed considerable attention from the courts.

Child porn is the least controversial, but even there you will find controversy. Apple rolled back its plan to scan everyone's iPhones for child porn due to how controversial it was, right? And there is significant debate about who legally counts as a minor, because that varies around the world, and about whether cartoons count or not.

False advertising: this is another relatively uncontroversial form of speech control for libertarians, because they want informed markets, but even here there is controversy because governments are bad at enforcing such laws neutrally. Consider all the news agencies that advertise themselves as neutral whilst being obviously biased.

Planning a crime: the border between people making idle threats and actually being legally liable is constantly being thrashed out by the courts even today, and usually convicting someone of this requires more than just speech. You have to actually take action towards carrying out the crime, not just saying you will.

In short, none of your examples are free of controversy. So the allegation you're insinuating there - that everyone who disagrees with you on this is a secret Nazi - does not work.

> No one is restricting the speech of the nazis on Substack.

In fact they are, because Substack does ban genuinely Nazi accounts. See the Andkon's Reich Press example. This doesn't bother anyone because it's a very clear and precise match. The fuss is the usual problem where the left wants anyone to their right banned under whatever rules are the most emotive, even when they don't apply at all.


Free speech in the U.S. is pretty uniquely libertarian compared with most places. So depending on your politics, the answer could be “every day”. Even in the US, there are restrictions on speech.

And the U.S. wasn’t always as unrestricted. Famously, people have been imprisoned for handing out antiwar pamphlets.


If you went into most restaurants or bars or similar spaces, and you started talking publicly about pro-Nazi stuff, you’d have the owner of the establishment come over and ask you to leave. I don’t see how anyone can think this is unusual or new or not expected.


In practice you will be able to talk about whatever you want in most restaurants or bars, because the owners aren't in the business of listening in on their customer's conversations and picking fights with them.


That’s why I said “publicly”, as in you’ve called attention to yourself in some way, enough for the staff to be aware and to see that the other patrons are also generally aware of you and what you’re saying.


If you go into a restaurant or bar and start yelling at strangers about pretty much anything you're going to be kicked out, regardless of topic. Exception: during a football match.


Sure, but without going too deep into hypotheticals, think of a group of neo-Nazis talking in the middle of a restaurant dining room or at the bar, loudly enough to be overheard by staff and patrons.


You would get kicked out for the same reasons as if you were a group of fanatical libertarian anarchists talking about politics loudly enough to be overheard by staff and patrons.

You seem to be badly missing the point here. A forum for eating is not a place for broadcasting political views. A newsletter platform is. The two aren't comparable.


Restating your words:

> A newsletter platform is [a place for broadcasting political views]

I'm not saying that your assumption is wrong. I'm asking:

1. Why is it reasonable to make that assumption?

2. How much say does the platform owner have over the extent to which the platform is "a place for broadcasting political views"? Put another way, how much leeway should a general purpose newsletter platform have to ban certain topics/speech, including spam?


I think our understanding of the world is so far apart that it'll be hard to make progress here. It's like asking me "why is it reasonable to assume that the purpose of the printing press is for the dissemination of views". I don't even know where to begin answering that because I can't comprehend the worldview that would lead up to asking it.

Spam isn't newsletters, so that seems like a non sequitur. Newsletters are meant to go to people who signed up for them.

As for how much say does the owner of the "printing press" have, the answer is that in a civilized society they should have the final say. In less civilized societies than the USA the government thinks it's smarter than the populace, and should control their information. History shows us that governments which smash up the printing presses were rarely smart.


> I think our understanding of the world is so far apart that it'll be hard to make progress here. It's like asking me "why is it reasonable to assume that the purpose of the printing press is for the dissemination of views". I don't even know where to begin answering that because I can't comprehend the worldview that would lead up to asking it.

Sorry, I was just too vague with my first question. A newsletter can be used for broadcasting political views. But it doesn't have to be used for broadcasting all political views, and can be used to broadcast views that aren't directly related to politics. A newsletter can broadcast only views the owners of the service tolerate (not necessarily want) on the site. It's not reasonable to assume that Substack is a place for all political views, and Substack is fully in the right to ban certain ones without banning others. I bring this up due to my personal speculation: Substack decided not to remove articles openly advocating Nazi beliefs, but had Substack gone the other way then a loud subset of anti-censorship believers would paint the opposite decision - to remove such articles - as "giving in to the mob" (or less likely, giving in to nonexistent coercion from a government) rather than as an equally voluntary, valid, and democracy-compatible decision.

> Spam isn't newsletters, so that seems like a non sequitur.

You're right. Spam was a terrible example. A better example would've been articles advocating something extreme such as complete destruction of Ukraine. Or a topic on a different axis, such as advocacy of sexual gratification (which Substack does ban).

> Newsletters are meant to go to people who signed up for them.

The Substack moderation controversy is not about who gets newsletters. It's about authors on Substack who don't want to associate with certain beliefs posted by other authors. (In this context, spam was relevant if only tangentially; I wouldn't want to write a newsletter using a platform too notorious for user-generated spam newsletters. Nonetheless, I was wrong to use spam as an example.)

> In less civilized societies than the USA the government thinks it's smarter than the populace, and should control their information. History shows us that governments which smash up the printing presses were rarely smart.

Agreed. What bothers me about the Substack controversy is that people keep emphasizing the government in the conversation even though the critics of Substack's moderation policy aren't trying to make the government do anything and aren't forcing Substack's hand. Just because the complainers are forceful in their language doesn't mean they are coercing Substack into agreeing with them (unless someone's been Machiavellian enough to doxx Substack employees or threaten the employees' safety; even then, guilt generally wouldn't apply to the entire mob). An online mob is part of the populace just as the people who oppose the mob are. Non-governmental requests to moderate in a certain way are just as democratic as non-governmental requests not to moderate in that way, as long as Substack gets to make the final decisions. Boycotting is not coercion and is democratic. Advocacy of boycotting is not coercion and is democratic. Criticism of boycotting is not coercion and is democratic. Doxxing is coercion. Threats to personal safety are coercion. The mob did not control Substack's decision here. This would be true even if Substack had decided to adopt the mob's moderation policies.


> Substack decided not to remove articles openly advocating Nazi beliefs

They actually did. Look at my other post on this thread that examines the Atlantic's cited examples. The only one that is actually clearly Nazi is suspended for ToS violations. The others are all nothing to do with Nazis, but it's a Tuesday so the left is claiming otherwise to try and censor their opponents.

> It's about authors on Substack who don't want to associate with certain beliefs posted by other authors.

And if they succeeded then it'd be "we don't want to be on the same internet as those other authors" and so on. Those people will never stop trying to shut down people who disagree with them and will certainly lie in order to get that outcome. Never trust censors!

> the critics of Substack's moderation policy aren't trying to make the government do anything

Remember that many governments outside of America ban websites for vague reasons like "hate speech". It's not just about the USA.

> Boycotting is not coercion and is democratic

They didn't want to do a boycott, they wanted their opponents to be denied the right to speech. They might end up trying a boycott now but it's not clear what it means to boycott a service like Substack, because they weren't receiving the material they were objecting to in the first place.


> the owners aren't in the business of listening in on their customer's conversations and picking fights with them.

Customers can complain about other customers to waiters and restaurant owners. That can happen in person, in reviews, or on customers' blog articles (you know, like the articles they post on Substack, if Substack lets them). "I overheard that other customer saying that waiters get paid too much." "4 stars. I'm reluctant to keep going to the restaurant though, because the waiters do nothing about loud customers." "That restaurant doesn't kick out people wearing Nazi symbols. Boycott it." A customer's suggestions can be unreasonable. The owner can ignore (un)reasonable suggestions. The waiter can ignore (un)reasonable suggestions and decide not to let the owner know about them. But kicking out a customer on the basis of customer complaints isn't necessarily less valid than kicking out a customer who hasn't attracted complaints from other customers or letting a customer who has attracted complaints stay.


It should be noted that for many, pro-Nazi might mean you are against immigration laws or for trump.


That’s not what we’re talking about here. We’re talking about newsletters that are explicitly pro-Nazi in nature.


One of the problems with labeling your political opponents Nazis is that no one believes you when actual Nazis show up.


Except that these are “actual Nazis” who have shown up. We’re talking about publications that, in some cases, use terms like “national socialist” and “reich” in their name and/or bio.


Curious that someone tries to sidestep a concrete example with some weird hypothetical to avoid confronting the actual issue at hand.

What is up with the lack of good faith from so many in this thread (not you, the parent)?

They don't want to be confronted with the breaking points of their ideology, but instead have to misdirect with some straw-man about another scenario.

And like a misdirected fool, I take the bait and comment on it.


Not so. Look at my post that goes through their examples, which HN being the bastion of well moderated tolerance that it is, is now at -1 and flagged. Only one meets your description and it's suspended for ToS violations. The ones that aren't suspended are just ordinary US politics that's being described as Nazis by the left.


> One of the problems with labeling your political opponents Nazis is that no one believes you when actual Nazis show up.

That's certainly true, but it sure is weird when a certain political leaning causes one to minimize the fact that actual Nazis showed up, which casts a hefty shadow on this perceived boundary between "actual nazi" and "not actual nazi". At what point does one stop imagining that enablement isn't a real thing? Shrug emoji, though, I guess.


In Singapore, LKY suppressed the communist party and their patsies to prevent a communist takeover of Singapore.

Puritans back in the old days were basically the Taliban, oppressing anyone who disagreed with their beliefs and way-of-life. They were pretty successful overall.

In Syria, Assad successfully crushed a revolt and is still in power by suppressing speech etc.

In the Arab world, the Arab Spring was successfully squished in multiple countries.

Communist China has been relatively successful by restricting speech.

"In the right" is probably the wrong metric, because it's meaningless. "Succeeded in its goals" is probably a better way of categorizing whether a specific tactic is successful or not. And success depends on the timeframe as well.


Consider these two patterns:

1. The owner of a website hosting user-generated content (such as social media posts) voluntarily bans users for speech the owner didn't like.

2. A government forces the owner of a website hosting user-generated content to ban users for speech the government didn't like.

Why and when is it reasonable to treat patterns 1 and 2 as sufficiently analogous to be counted under a single banner of "censorship"? Consider a third pattern as well.

3. A company offers tours of the buildings to outsiders who apply for tours. A tour visitor says something the owner doesn't like, so the owner voluntarily bans the visitor from continuing the tour.

Is it reasonable to call pattern 3 censorship? And if it is, is the so-called censorship in pattern 3 "in the wrong", "no matter the circumstances"?

Substack is a case of pattern 1. How should someone track which instances of Substack's moderation of XYZ ended up being "in the right" vs "in the wrong"? XYZ could be people who write in favor of Nazi beliefs (who Substack isn't banning). XYZ could be people who write in favor of sexual gratification (who Substack is banning).

As Ken White (Popehat) wrote about Substack's policy regarding Nazis [1]:

> My point is not that any of these policies is objectionable. But, like the old joke goes, we’ve established what Substack is, now we’re just haggling over the price.

...

> Substack has made a series of value judgments about which speech to permit and which speech not to permit. Substack would like you to believe that making judgments about content “for the sole purpose of sexual gratification,” or content promoting anorexia, is different than making judgment about Nazi content. In fact, that’s not a neutral, value-free choice. It’s a valued judgment by a platform that brands itself as not making valued judgments.

[1] https://popehat.substack.com/p/substack-has-a-nazi-opportuni...


Twitter removing the revenge porn of Hunter Biden comes to mind. But that's my opinion.

What about Kiwi Farms and Stormfront? I'm pretty sure that was right too.


I mean, seems pretty simple to me. They have an anti-hate speech provision in their content guidelines. They have groups that very explicitly violate it (and not in the sense of people randomly accusing others of being nazis, but of upfront and prominent nazis that display swastikas and loudly advertise being nazis). They're choosing not to enforce their own rules.

I would consider taking my audience elsewhere as well.


"Being a Nazi" or displaying a swastika isn't against their TOS. They have specific requirements that one has to breach to violate.


I'm of the mindset that it takes work to keep democracy, freedom and egalitarianism alive. You can't just let anyone publish anything and think that it will all work out in the end.

It is tough because separating freedom of expression from hate speech is hard and prone to errors, but the alternate is to let Nazis and other hate groups to grow and strengthen. History has shown that when evil is given a platform, it grows instead of withering away in the light.


Perhaps the reasons why those groups grow so easily should be addressed instead of treating it like it is an inevitable force of nature that can have a dam placed in the right spot to keep it eternally contained.


No one is treating anything like an inevitable force of nature, nor is anyone claiming that it can be eternally contained.

But it should be obvious given the last decade or so, if you've been living above a rock, that the internet and social media provide a force multiplier for speech that, due to the priorities and incentives of algorithms, prioritizes speech many might consider harmful and dangerous. One can no longer naively accept truisms such as "sunlight is the best disinfectant" and "the only answer to bad speech is good speech" when history has shown that the playing field is not level, and that despite being exposed to the light of day, running riot across the internet and being debated furiously on all fronts, these groups only feed upon, harness and grow amidst the controversy and chaos. However, limiting the scope and velocity by which they can spread their message and recruit has proven effective in slowing their influence, if not stopping them altogether.

You're also presenting a false dichotomy here. We don't have to choose between restricting hate speech and fighting hate groups on other fronts, we can do both. However, just as one does not deprogram a victim from within a cult, one cannot effectively address the root cause of the spread of racism and hate in an environment where that message spreads unabated.


krapp here sounds exactly like a Catholic Pope in the 16th century.

"Those who do not remember the past are condemned to repeat it," indeed.


I haven't read a lot of primary sources from the period, but "the internet and social media provide a force multiplier for speech that, due to the priorities and incentives of algorithms, prioritizes speech many might consider harmful and dangerous" and "limiting the scope and velocity by which they can spread their message and recruit has proven effective in slowing their influence" aren't things I'd expect to hear from a Catholic Pope in any century.


In cultural terms, these six decades were marked by the rise and rapid development of the censorship policy of the Catholic Church, directed mainly against printed books, as part of its struggle with the Reformation and with those aspects of Renaissance culture which it came to regard as immoral. Many well-documented studies have shed light on ecclesiastical censorship and on the various Indexes of Forbidden Books. Printed books soon came to be perceived as a dangerous channel through which Protestantism was able to enter the minds of readers and influence their thought.

https://core.ac.uk/download/pdf/33337967.pdf


Yeah, but the Pope was right!!!


There have always been racist groups in the US. Usually they become more popular after Congress gives minorities more rights. For example, Reconstruction led to the rise of the KKK, which experienced another resurgence in the 60s and 70s after the Civil Rights Act was passed.



Why do so many people post this without reading it?

> I do not imply, for instance, that we should always suppress the utterance of intolerant philosophies; as long as we can counter them by rational argument and keep them in check by public opinion, suppression would certainly be most unwise.


Every time I see "The Paradox of Tolerance" referenced in these kinds of discussions, it reinforces my view that it's pure sophistry. There is no paradox; it's simply a convenient tool for justifying censorship, which is ironically what it pretends to protect us from.

Ideals require integrity to function, not unlike how cheating during a diet won't get you anywhere. If you think the ideals of liberal democracy aren't strong enough to weather a few naysayers in the public forum then you probably never believed in them in the first place.


If you want to defend giving nazis a platform, be my guest.


The paradox of tolerance is an issue to consider, but not necessarily a situation to avoid. Regardless of whether and when it's reasonable to call certain policies "tolerant of X" or "intolerant of X" or "intolerant of anti-X", the policies are value judgements, just as critical comments toward the policies are value judgements.


> At least 16 of the newsletters that I reviewed have overt Nazi symbols, including the swastika and the sonnenrad, in their logos or in prominent graphics.

> Andkon’s Reich Press, for example, calls itself “a National Socialist newsletter”; its logo shows Nazi banners on Berlin’s Brandenburg Gate, and one recent post features a racist caricature of a Chinese person. A Substack called White-Papers, bearing the tagline “Your pro-White policy destination,” is one of several that openly promote the “Great Replacement” conspiracy theory that inspired deadly mass shootings at a Pittsburgh, Pennsylvania, synagogue; two Christchurch, New Zealand, mosques; an El Paso, Texas, Walmart; and a Buffalo, New York, supermarket.

> Other newsletters make prominent references to the “Jewish Question.” Several are run by nationally prominent white nationalists; at least four are run by organizers of the 2017 “Unite the Right” rally in Charlottesville, Virginia—including the rally’s most notorious organizer, Richard Spencer.

> Some Substack newsletters by Nazis and white nationalists have thousands or tens of thousands of subscribers, making the platform a new and valuable tool for creating mailing lists for the far right. And many accept paid subscriptions through Substack, seemingly flouting terms of service that ban attempts to “publish content or fund initiatives that incite violence based on protected classes.” Several, including Spencer’s, sport official Substack “bestseller” badges, indicating that they have at a minimum hundreds of paying subscribers. A subscription to the newsletter that Spencer edits and writes for costs $9 a month or $90 a year, which suggests that he and his co-writers are grossing at least $9,000 a year and potentially many times that. Substack, which takes a 10 percent cut of subscription revenue, makes money when readers pay for Nazi newsletters.

https://www.theatlantic.com/ideas/archive/2023/11/substack-e...


This is a PR piece for Casey Newton's substack.


So .. is it 1 in 10 posts that are allegedly "neo-Nazi" content? 1 in 10,000? 1 in 10,000,000? What's the threshold where if X% of posts on any given platform are alleged to be wrong people revolt and cancel that service? Is that threshold higher or lower than what a motivated group that wanted to destroy the service could produce?


This is a bit like asking what the threshold amount of mouse droppings is in your soup before you would stop eating it.


This is actually a number that the FDA/USDA regulates.


They might regulate the number that's okay for the person serving me, but they definitely don't regulate the amount I'm personally okay with. I'm a little curious which is lower. But only a little.


I hope so, but I would personally prefer less.



