Hacker News
Microsoft threatens to stop hosting Gab unless posts are removed (thehill.com)
266 points by anigbrowl on Aug 9, 2018 | 524 comments



A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them. When they are cast into the shadows they can grow under everyone's noses, in private, the only place they are allowed to exist. There, these bad ideas cannot be challenged by others, and people will be convinced to believe in them with nobody around to point out that the idea is genuinely terrible.

It gives bad ideas a breeding ground to fester, and you'd never even know about it. By publicly prohibiting speech, all that happens is that it gets brushed under a rug and people pretend it doesn't exist. That doesn't make the problem go away; it's the same as shutting your eyes, covering your ears, and pretending the monster in front of you no longer exists. If you get to define certain speech as unspeakable, you justify the censorship of any speech determined to be unspeakable when you lose power and someone else steps up to the plate. As history has shown, it's a matter of "when", not "if".

Actions like this serve to placate the public. The general public won't be angry at Microsoft for censoring platforms "known for" or "complicit in" hate speech (or worse). It's always a slippery slope, though. Once you open up the can of worms that is censorship, you justify future censorship.

I wait to see what will be next on the chopping block.

Edit: A few too many responses to respond to everyone, but if anyone would like to speak with me on it more in depth feel free to email me - information on how to contact me can be found on my profile.


That argument works well in relatively small social groups where people generally can be relied upon to act in good faith, and where there are social disincentives to clinging to bad ideas after they have been refuted.

When you have a small group with social bonds, you don't have to worry about a bad actor constantly repeating their already-refuted arguments to people who haven't heard the refutation.

But when you have the whole internet as your audience, no matter how many times you are refuted, you can say it again and a whole group of people will be hearing it for the first time.

I have been on the Internet since the 80s. What I have personally found is that a lot of its social norms were lifted from academia, and they work very well as defaults in a new social media context. Like Hacker News when it was first launched.

But eventually, every "social medium" on the Internet has its "Eternal September," when the number of new folks in any conversation outweighs the old hands. At that point, the social norms that worked for the small, cohesive group no longer work.

And worse, there are specialist parasites who exploit people's reluctance to change their social norms, and then act in outrageously bad faith.

Sites like HN have survived by changing. For all of the popularity on HN of "unrestricted free speech," HN is actually moderated, and that's why it works.


People are better off hearing a bad idea and hearing it refuted than never hearing a bad idea. You could say, "well then detractors of the idea will have to keep refuting it" and you'd be right. That's how public forums work, and I hope you don't take everything your parents believed for granted just because their detractors have already been "refuted".


It's too easy to come up with, spam, and spread bad ideas, but it takes actual work to refute them. As long as people are not paid to refute bad actors with an agenda, bad views will eventually stand unchallenged and make life worse for everyone, for the profit and amusement of a few. As said, refuting partially worked in small communities, but not at global-audience scale, where there is no effective means of social sanction of the kind that would happen in tighter-knit ones.

So, in light of this, spreading provenly bad ideas shouldn't be as cheap a process as it is now.


Yes.

Also, you can refute all you want, but if the bad ideas are better clickbait than your refutations, they will spread faster.


A lie is half-way around the world before the truth has its pants on. — Mark Twain

Younger me was a free speech absolutist. Alas, human nature.

Propaganda works, so well that people aren’t even aware of changing their minds. Overton windows. Blowback, where refuting further cements the falsehood. Belief as attire.

Etc, etc.


And, as has been clearly illustrated, "trusted" news sources can't always be trusted either. So preventing "lies" (lie is almost always a matter of perspective) from spreading ... is really always also preventing people from finding out the truth.

We just used to find that acceptable. Wars have been started by fake news from "reputable" sources. If fake news regulations become widespread and usable, they would be trivially easy for the rich and the government to abuse for censorship.

Can you imagine how this would have gone had the president had the ability to prevent "fake news" from spreading ?

https://globalnews.ca/news/4209011/trump-immigrant-animal-fa...

(I would argue that in Europe this "fake news" censorship is in fact pretty much normal. Only ... a part of the fake news isn't fake at all, but rather embarrassing to the government. The situation with rioting in French cities is 100x worse than sites like "Le Monde" report (which ones? Let's just say any > 1 mil ppl and there's only a few you'd be wrong about. Nice is pretty damn bad, for instance. Destruction in the city center every friday. You should see the security measures shops are taking, wtf). And other things that the government doesn't like get extremely downplayed as well : France was pretty much shut down due to a student strike in Paris for Macron's labor laws 3 times, and twice for his agricultural policy (both of which are downright abusive, and I'd say the protests were very justified) ... and there was a small mention of a protest march in Le Monde. Mass car torchings are a weekly occurrence since 2005 or so where I live. It baffles me, but it really looks like they're literally doing this just to save face for that asshole Macron. I get it ... I get it. He "saved Europe", especially after Brexit. He also fucked up France, and that matters more to me, and should matter more to French media and French voters. It's absurd because there literally isn't a single person in France that doesn't know these events happen, so what's the point of keeping them out of newspapers ? Do they think people will just forget ?)


> lie is almost always a matter of perspective

This meme I do not understand. There is one physical reality[0]. A thing either happened, or it didn't. Physics doesn't change its workings because of perspective. A precise enough statement about reality can be either true or false, there's no middle. There's no "depends on perspective". If I say, "the Sun is shining", the truth of that fact does not depend on perspective.

Of course, very often we're dealing with complex statements that deal with many aspects of reality simultaneously. We often talk about indirect evidence. But that doesn't suddenly open a wormhole to a post-truth dimension. We have tools and frameworks to deal with that. We can say, there is strong evidence that this occurred. Or, there's strong evidence that it didn't. Or, the current evidence points in neither way.

Arguing that truth and lies are a matter of perspective is just trying to deliberately confuse people. After all, if we really accept this view, then reality doesn't exist, nothing makes sense, and we can all go back to the caves we came from.

--

[0] - simulation ideas and other things aside, though they are all conveniently defined in a way indistinguishable from us all being separate minds inhabiting one reality.


I do agree that there's one objective physical reality that's accessible to us. There is quantum level stuff, where indeterminacy is an issue. But at the macroscopic level, that's pretty much averaged out. Except maybe entanglement.

But anyway, as you say, complex statements typically involve multiple facts, assumptions, inferences, conclusions and speculation. And yes, we have language for that.

The problem, though, is that people often express complex statements in simple language. They gloss over complexity, and hide assumptions, inferences, speculation, etc. So it's easy to have multiple stories claiming to be true. And it takes considerable work to decompose them, and determine which best fits the facts.


> So it's easy to have multiple stories claiming to be true. And it takes considerable work to decompose them, and determine which best fits the facts.

That's very true. My point is that we need to do that work - and encourage people to do that work, and teach them how to do it - instead of just saying that "lie is a matter of perspective" and giving up.


True. But damn, I didn't get serious training in critical reading until grad school. And really, not until I did some litigation consulting ;)


The idea that objective truth about reality not only exists but is accessible to the sufficiently enlightened, at least if they use the right "tools and frameworks", is a bold philosophical thesis that has undergone quite a lot of criticism in the past few thousand years. The idea that advocates of 'perspectivism' are simply trying to confuse people is absurd. "There are not facts, only interpretations" people are just as passionate about trying to make themselves understood as anyone else.

Your comment is exactly why it is so important that we continue to value free speech. Many people are utterly convinced that their views of reality represent absolute and objective truth, totally unaware of the philosophical assumptions underpinning their worldview. History is full of people convinced their view of reality represented absolute and objective truth. They seldom agreed.

Giving such people the tools to censor their opponents is very dangerous. First, the set of people who are wielding those tools will inevitably change. Secondly, well, where they burn books, they will someday burn people.


At no point was I trying to argue against free speech - just against the notion that "lie is almost always a matter of perspective". I'm aware there has been criticism of the concept of objective reality in philosophy, but I'm also not aware of any framework of thought that fits observable reality better. In fact, all the progress of science and technology, as well as all the communication we do with each other, is based on the assumption that there is an objective reality which we can observe, and about which we can exchange information that leads us to create a consistent, shared view.


I have no idea how to articulate these notions, please forgive.

#1

While I think of myself as a Popperian, I've begun to adopt a "true enough to act now" view of the world. It's close to the notion "a good plan violently executed today is better than a perfect plan tomorrow".

Mostly, I'm sick of the debate. I have relatives and friends who are creationists, climate change deniers, supply-siders, etc. Instead of trying to convince (persuade) people about the truth, opposing the torrent of bullshit forces pushing these ideological rocks uphill, I now focus on "ok, what can we do today?"

In the case of evolution (vs creationism), the value of the theory isn't that it's true (or not). Rather, the value is that it gives us a set of axioms that allow us to make reasonably good predictions about the world.

So when it comes to the complicated topics (eg current events), I'm trying to relax my standards for objective truth, be satisfied with "true enough, for now".

#2

We need more data. (aka You can't manage what you can't measure.)

I was a tree hugger when GIS and satellite imagery were being rolled out. They completely changed the conversation about clearcutting, habitat loss, impacts, etc. It was no longer just a propaganda war. Legislators could see for themselves, if they wanted to, what was happening.

#3

Stop fighting "fake news". There's a better way.

Verifiable attribution, sources. All source data needs to be published.

Every publisher (that wants to be verifiable) would have their own blockchain(s), sign their own works. Whenever a reader sees a news item, they could verify the authenticity by checking the blockchain.

This is no different than adding SHAs to software releases.

Each blockchain would be seeded with a CA, leveraging our current chain of trust infrastructure.
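As a rough sketch of what that per-item check could look like (my own illustration, not part of the proposal above; it uses a plain Ed25519 signature plus a SHA-256 digest rather than an actual blockchain, and the locally generated key stands in for one certified by the publisher's CA):

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature
    import hashlib

    # Publisher side: in the proposal the key would be certified by the
    # publisher's CA in the existing chain of trust; here it is just generated.
    signing_key = Ed25519PrivateKey.generate()
    verify_key = signing_key.public_key()

    article = b"Full text of the news item, plus links to the source data."
    digest = hashlib.sha256(article).hexdigest()  # the "SHA on a release" analogy
    signature = signing_key.sign(article)

    # Reader side: recompute the digest and check the signature against the
    # publisher's published key; verify() raises if the item was altered.
    try:
        verify_key.verify(signature, article)
        print("authentic item, sha256 =", digest)
    except InvalidSignature:
        print("item does not match the publisher's signature")

Checking "the blockchain" would then just mean confirming that this digest/signature pair appears in the publisher's signed, append-only log.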

#4

Also, the story is made in the edit. So publishers need to show their work. Just like in science. Sure, cut down a two-hour interview to just 5 minutes. But you still have to post the original footage.

#5

No one challenged my abandonment of "free speech absolutism." So what's the alternative?

Recognizing that cognition is social, that we (mostly) cohabit a shared reality:

Defending public discourse from the abuses of free speech is a tragedy of the commons [https://en.wikipedia.org/wiki/Tragedy_of_the_commons]. This framing goes further than Paradox of Tolerance [https://en.wikipedia.org/wiki/Paradox_of_tolerance] by justifying the limitation of some speech in service of the greater good.

[Maybe call it the "marketplace of ideas" to engage the Freedom Markets™ zealots.]

I have no idea how to police the commons.

But I do think punishing parties for knowingly lying would be a good start.

There was a time when FCC broadcast licenses were contingent on serving the public good. And license renewals were serious business.

I'd be okay with doing the same for online mediums as well.

There would be two classes of speech: anonymous and verified (per #3 above). Verified speech, and the aggregators, would vouch for what they say.

If someone is found guilty of using verified speech to harm the public good (jury of their peers, public hearings), their root CA would get revoked.

They can still publish anonymously, of course. But they can no longer claim to be telling the truth.
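A hypothetical sketch of that revocation step, continuing the Python illustration above (the revocation list and function name are made up for the example): verified speech would be a signature check plus a lookup against a published revocation list.

    import hashlib

    # Fingerprints of publisher keys/CAs found (per the proposal, by a jury in
    # public hearings) to have used verified speech to harm the public good.
    revoked_fingerprints = set()  # would be fetched from public revocation records

    def is_trusted(publisher_key_bytes: bytes, signature_ok: bool) -> bool:
        fingerprint = hashlib.sha256(publisher_key_bytes).hexdigest()
        # Verified speech: the signature must check out AND the key must not be revoked.
        return signature_ok and fingerprint not in revoked_fingerprints

A revoked publisher could still post unsigned (anonymous) items, but is_trusted would never return True for them.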

---

Okay. Thanks for listening, humoring me. I have these notions rattling around in my head. And I like your comment history. And wanted to share with someone(s), in case I get hit by a bus.


Upon reflection, that is truly a great analysis! I just read Larry Sanger's Everipedia proposal, and believe that the concept of trust revocation would be a useful addition.

Would you be interested in fleshing it out as an Everipedia page? If you're not editing Everipedia yet, I can help you with that. Anonymously, if you like. Or I could write the page, with your permission and perhaps input.


Thank you for the Everipedia tip. Hmmm, very interesting. Sending you a PM via email.


That's a great analysis. And I like the recommendation. Liars with verified true speech get their CAs revoked. Love it.


Interesting. Would someone who applies that philosophy be willing to say flat-earthers are wrong?


I now believe this is a question better posed to psychiatrists.


That is what I call 'thermodynamic truth', or the truth we would see if we ran time backwards. The problem is, we can't run time backwards, and because of information loss there can often be many possible situations that create the outcome we see.


> The problem is, we can't run time backwards, and because of information loss there can often be many possible situations that create the outcome we see.

We can still quantify that fact and include it in our reasoning.


Also, the Bullshit Asymmetry Principle, or Brandolini’s Law: The amount of energy needed to refute bullshit is an order of magnitude bigger than to produce it.


Thank you! Added to my ever growing list.

This also resonates with me

"It is impossible for someone to lie unless he thinks he knows the truth. Producing bullshit requires no such conviction. A person who lies is thereby responding to the truth, and he is to that extent respectful of it. When an honest man speaks, he says only what he believes to be true; and for the liar, it is correspondingly indispensable that he considers his statements to be false. For the bullshitter, however, all these bets are off: he is neither on the side of the true nor on the side of the false. His eye is not on the facts at all, as the eyes of the honest man and of the liar are, except insofar as they may be pertinent to his interest in getting away with what he says. He does not care whether the things he says describe reality correctly. He just picks them out, or makes them up, to suit his purpose."

https://en.wikipedia.org/wiki/Bullshit#Harry_Frankfurt's_con...

Add "profit motive" and it perfectly explains con artists like Alex Jones, Rush Limbaugh, Ben Carson, too many to list.

Like P. T. Barnum before them, the primary intent is to fleece their audience. Suck them in with the outrage, then empty their wallets.

Rinse, lather, repeat.


I agree, but I also think there’s some effect of censorship that drives people toward the censored idea. Part of it is a rebellion against authority (“who are YOU to tell ME what ideas are fit or unfit to consume?”); I think there’s some related effect that is driving Trump’s popularity—some people are rebelling against the cultural authority of the left (by which I mean major news publications, entertainment media, university faculty, social media companies, etc—powerful cultural institutions).

Another problem is what happens when bad actors come into power and get to do the censoring? We’ve just handed them a nice precedent, have we not?


> there’s some effect of censorship that drives people toward the censored idea.

Not successful censorship per se, but attempted censorship. See the "Streisand Effect": https://en.wikipedia.org/wiki/Streisand_effect


That's why refutations also need to be good clickbait.

But of course that requires good communicators, while many people with in-depth knowledge are not even mediocre ones.


For refutations to be both true and better clickbait requires not merely good communicators, but excellent or even world-class communicators, to adequately satisfy both sets of constraints.

Meanwhile, bullshitters have a much easier job.

eg. the creationist argument "If evolution is true, why are there still monkeys?" doesn't even make sense, it just sounds good, confirms existing bias, conveys the (false) feeling of understanding, etc. As a result, it is unfortunately very persuasive, and continues to spread much faster than the refutations, which aren't as simple and don't give the same superficial "aha" jolt of satisfaction.

In fact, the closest I've come to a refutation that works as well is "well, if selective breeding of dogs is true, why do we still have wolves?", but of course then you still have to follow up with one or more examples of natural selection mechanisms & the idea of non-uniform environments (more small and fast prey over here, dry and cold weather over there, etc.), or you're inadvertently confirming their bias by introducing an intelligent selecting agency as part of your argument.


Maybe there's a reason a "bad idea" is clickbait.


Can't think of a better explanation than this video by CGP Grey, titled "This Video Will Make You Angry", [7:26].

https://www.youtube.com/watch?v=rE3j_RHkqJc


That's almost a truism, no?


Indeed.


Hatred simply feels good. Unless you are saying that is something defensible or praiseworthy, I am not sure where you are going with this thread of thought.


Who gets to determine what is "hatred"? The natural upvote/downvote economy of the internet or a self-appointed anonymous class of bigtech mass-censors? When you go on the internet would you prefer to see reality or an artificial selection of on-canon content pre-approved by the priest-media class as in the age of television? In many ways it doesn't matter because infotech decentralization will slowly find a way, and forcible censorship will no longer be possible.


Whoa there buddy, there's other options for moderation besides media-priests || free market.

How about we just let the people who own the machines decide what data sits on their machines?

So I agree, decentralization will allow some communities to be total free market reddits and chans where anything goes, and I expect them to be very unpleasant places to hang out for some and not others.

I'll be over here in my private community where Microsoft can't read my discussions and anyone whining about the priest-media will be booted.


With more decentralization comes more freedom of association; I will choose to be a part of communities that I judge to have a reasonable view of what is "hatred", and those communities will, just like they do today, kick out offenders.

Decentralization will give people the opportunity to spread their hatred among themselves, without involving me, which is... exactly how things operate today.

So... Bring on infotech decentralization, I guess. It's got its merits, and for me, nothing about all this hate speech will change.


I remember national TV talking about UFOs, proof of WMDs in Iraq, homosexuality being a disease, conspiracy magazines in every gas station shop, and no easy way to verify anything.

I have the impression that the general quality of available information has only gone up.


Promoting those things back then had significant opportunity costs for getting things published and reaching an audience. And bad actors even got denied free or paid communication channels after a number of bad-faith offenses, so gatekeepers to mass communication were a good thing in that respect. Of course they also work the other way around, if a bad idea manages to creep in, but that took a significant amount of coordinated resources in analog times.

I think it's no accident that the rise of fascism and totalitarian Stalinism coincided with the rise of radio and the maturing fields of advertising and mass psychology. That was the first wave in which it got an order of magnitude cheaper to reach big audiences. We're witnessing the second one. (Third one if you count the printing press, but the adoption happened over generations, so societies had time to adapt.)

Nowadays intentionally bad ideas work like spam: each has only a very small individual chance to get hold of a person, but with a near-infinite number of permutations for free, it's far more likely to hit each individual's psychological weak spot eventually.

So it was never easier for individuals to spread ideas, but the same goes for all sorts of corporate and even nation-state actors posing as individuals.


> (Third one if you count the printing press, but the adoption happened over generations, so societies had time to adapt.)

Even though the printing press kicked off societal changes that took a few centuries to mature and settle down, it was only slow by our current standards. The societies and institutions that were being disrupted evolved in an environment that changed on millennial timescales, so they were, in some senses, shell-shocked for most of those 'few centuries' due to a failure to adapt despite having ample (by our standards) time, and the new institutions that arose were regarded as disruptive upstarts for generations (eg. the Lutheran church).

Anyway. An order-of-magnitude change is going to be just as disruptive, even if the change is from "snail's pace" to "tortoise-like".


> It's too easy to come up with, spam, and spread bad ideas, but it takes actual work to refute them.

That makes sense for shared forums. But not in the case of someone like Alex Jones, where he has his own media platform on his own site and various channels, and not in the case of Gab where they're hosting their own forum.


People keep making this technical argument

Technically Sodium reacts with water, so if you keep them apart it should be fine.

Does not translate to

Carry sodium in your pocket, as long as your pocket is dry.

------

In non analogy terms - this is not how forums work anymore, at all.

This was maybe how they worked back in the bbs days or slashdot era.

Today - they're just places to generate noise and saturate human attention.

The battle has moved to another level entirely, and people who bring up incorrect information are literally predictable props, used to confuse the audience.

Trolls KNOW how a comment is going to be refuted, and form statements which will either

1) dodge the filter,

2) be not vile, but will smuggle/trojan in arguments which are vile

3) overwhelm the ability of people to defend

And if all of the above fail, they will harass or use other methods to "win".

The objective is screen real estate, to be the dominant thought.


I don't have a source for you but I'm pretty sure I've read studies in the past that actually directly contradict what you're saying, studies where people were exposed to refutations of incorrect ideas, and later actually remembered the bad ideas but forgot that they were refuted.

Which is to say, if you already know about the bad idea, having it refuted is presumably a good thing, but if you don't know about the bad idea in the first place, learning about it in order to have it refuted may actually backfire.


There are an infinite number of bad ideas in the world. No-one’s life would be improved by hearing them all.


In Saudi Arabia letting women drive was/is deemed a bad idea. Some cultures think it is a bad idea to not cut off the clit of girls. There are Islamic scholars that will go on at length about how you should physically discipline your wife, for her own good. In those places/societies, opinions against all of it would be considered wrong ideas. If you stifle speech, you don't just stop the society from regressing, you also rob it of progress.


Yes, proposing that a group have more rights that they don't currently enjoy is just like saying a group should be suppressed and have their rights curtailed.


It is. The only point where "rights" matter is when two groups of people are in conflict, and each side will always be able to express what they want as a "right" (e.g. making it illegal for other people to describe you in their own terms is your "right to exist").


That doesn't follow. If group A enjoys a right and group B does not, and that right is extended to group B, Group A no longer enjoys the same advantage over Group B but hasn't suffered any limitation of rights. To use the supplied example, if you're a male driver in Saudi Arabia your driving privileges are unimpaired by the novel legality of female drivers.


> if you're a male driver in Saudi Arabia your driving privileges are unimpaired by the novel legality of female drivers.

Your right to drive is unimpaired, but the privilege of driving that you enjoy most definitely is.

Most movements to expand rights to new groups are opposed by those who would thereby experience a diminution of their privilege (which is always relative).

This is also why you have the wealthy so often opposing policies that will create more wealth for everyone including themselves, because those same policies reduce their relative advantages.


Group A will always find some way to define the change as them losing a "right". E.g. in the US if you propose to extend the right to life that born people currently enjoy to the unborn, many born people will claim this is a violation of their rights.


People don’t get more rights than others. All individuals get the same unalienable rights, and that is a universal truth which is self-evident.


How is this "universal truth self-evident"?

I'd love you to show me that there is a repeatable and measurable experiment that determines that as a fundamental function of the universe, people should have equal rights.

Personally, I don't believe it is a universal truth; morals and ethics are a subjective topic, and stuff like "everyone should have equal rights" is something that most people, including me, can agree on fairly easily. Other people might not agree.


The founders held those truths to be self-evident, but in some contexts (say, outside of educated 18th century gentry) they must be argued.


This doesn't work without a universal moral law, an assumption now deprecated in many circles.


I disagree, but in any case you seem to be disregarding the context of my comment above, without which it is empty of meaning.


Maybe not. But is one’s life diminished by hearing bad ideas? I don’t think so. For instance, the idea that the Earth is flat is laughable. But my life doesn’t deteriorate nor improve based on that idea. We can all agree it’s a stupid idea, laugh at it, and move on.


Not all bad ideas are so easily brushed off as being "totally laughable". For example a common and particularly insidious one that gets a lot of "logical" people who think of themselves as good critical thinkers is tying IQ to genetics. This leads people down the path of [IQ is largely determined by genetics] => [X impoverished group has a lower IQ] => [Because of their IQ they are impoverished] => [White people are superior].

Looking at that line of thinking it's ridiculous, there are assumptions and a gross misunderstanding of correlation and causation and confounding factors. But if you are not educated enough to see the problems in those arguments (which I think would be a higher level of education required than believing in flat earth) or if you are a little bit inclined to let someone walk you down a "logical" path of fallacies then it's easy to make that leap. And that's an easy argument for a white nationalist to make. It's a lot less easy for someone to refute it.


    > But is one’s life diminished by hearing bad 
    > ideas? I don’t think so
You may think that it's harmless to have thousands of racist theories floating around the internet. I don't. I think it plants seeds of doubt. "Group X must have done something to make so many people hate them" etc.

Let's say I publish an article alleging you murdered your previous girlfriend/boyfriend. It goes viral. A month later, the truth airs! Someone publishes a follow-up explaining that you are innocent. Do you think your life will just go back to normal?


Depends on the bad idea.

If the bad idea was that you and your family should be killed or forcibly expelled from your jobs and homes, and you heard it discussed seriously and frequently by your neighbors, police officers, and elected officials--yeah, I think that might diminish your quality of life in some ways.


You don't even have to go that far.

Look at anti-vaxxers. A small group of hysterical "do-gooders" educated by bad ideas are doing a lot of harm, and if it were up to them they would reverse arguably the greatest achievement in medical history.

Kids actually get killed thanks to the efforts of those assholes.


That is clearly untrue. History is just full of bad ideas that have caused the death of untold numbers of people. The problem is that we can’t all agree and people do not move on.


Bad ideas certainly can diminish quality of life. Firstly because we can't all agree that they are stupid and move on - the flat earth trolls are actually gaining momentum, and we've been re-litigating creationism for over a hundred years. This re-arguing ties up an enormous amount of energy that we could spend far more productively.

And secondly, when the masses weary of arguing nonsense ideas, there is a danger of them becoming acceptable. If we all stop arguing when the trolls repeat yet again that jews/immigrants/women/gays are sub-human and should be removed, some people will take that silence as assent and the violence begins.

So what we need is some kind of snopes for ideas, where our brightest and best could weigh good and bad ideas, and the rest of us could just point to their results. But wait - that's what a university* is! And it's no coincidence that the trolls direct their fiercest fury at academics.

*Trump University excepted.


Is there any evidence that there are more people who believe crazy things than before? It seems way more likely to me that there is a portion of the population who are nuts and basically cycle through various conspiracy theories.


Depends on how much your time is worth either refuting, or investigating, any claim which may or may not be bad.


I think the problem is that small groups no longer exist. The internet is a vast and deep public space, just like our planet. You could never hope to explore every part of the internet, just as you'd never expect to explore every inch of our planet. So just as it's possible for there to be vast parts of our earth that millions of people will see but you never will, it's also possible for there to be vast public forums on the internet that rational thought won't ever reach.

* Typo


> People are better off hearing a bad idea and hearing it refuted than never hearing a bad idea.

We humans have a strong preference for ideas that support our existing beliefs. This feature, when combined with the proliferation of sources of news and other information (who directly profit from this feature), means that the ones who are most in need of hearing the refutation of a bad idea are also the least likely to ever hear it.


A certain percentage of people ignore the refutation and end up believing the bad idea. For certain bad ideas (racism, anti-vax etc) this can result in people getting killed.


I totally agree in principle, but social media is a bubble. People don't befriend people with opposing views on Facebook; if someone posts something they disagree with, they'll make a passive-aggressive post and unfriend the person, further insulating themselves from ideas they don't like.

This is one reason every issue has become so polarized these days, people don't hear the detractors they just get their own views amplified back to them.


So tell me how the "refuting" of the idea that Sandy Hook was a hoax caused the harassment of parents of children murdered there to stop?


The people harassing Sandy Hook victims should be arrested for stalking and assault or sued for libel and slander. Letting them talk about it on the Internet just helps the idiots incriminate themselves.


And causes the parents to have to move several times, to the point where they can't even visit their child's grave anymore.


What if your parent is one of the Sandy Hook parents victimized by Alex Jones? And the thing they believe is that your sibling existed?

I mean, at some point you realize that people have figured out repetition is the point and there can't be any refutation if they just do not acknowledge it.

Your outlook recalls the way the internet was 20 years ago, when, for instance, holocaust deniers had their little group on Usenet, and idealists could spend eternity providing evidence they were wrong. It seems so quaint now.


It's not clear that this is true, though. Psychologically, your average human is likely to assign equal weight to the bad idea and its refutation, which is exactly what you don't want. Moreover, this assumes that people hear both the bad idea and its refutation, as opposed to one or the other from inside whatever cognitive echo chamber they happen to inhabit.

In some cases (climate change denial, flat-earthers, geocentrism, Holocaust denial, etc.), the refutation may also presume a higher level of scientific, historical, or other general knowledge than the bad idea. Bad ideas that play upon our cognitive / perceptual biases have lasted for millennia, and they are very hard to dislodge from the public mind. To some extent, these are all bad ideas that people have to be educated and/or socialized out of.

IMHO, this is one of those precepts that most people want to be true. Unfortunately, we're not yet in that perfectly educated rational utopia :s


>People are better off hearing a bad idea and hearing it refuted than never hearing a bad idea.

That seems like a bad idea. Can you prove that given historical data and current data? Show me where the spread of bad ideas on social networks has been a net positive in large populations. Show me where the refutation of bad ideas has been effective in those populations. Provide a time scale as well.


As much as I hate censorship, there are always a bunch of idiots who believe the bad ideas. The neo-nazis are the lesser evil compared to the alternative medicine crowd.

Even if you don't believe your parents, you still got no vaccine, plus other crazy medical treatments even the comic books can't imagine.


Isn't this very similar to what Tipper Gore's group tried in the 80s to censor rap and "hard rock" artists ?

"Oh! We need to block this harmful speech. Think of the children! "


If a record store chooses to carry less than every album ever made, does that mean it's censoring the ones it doesn't carry?

Because the subject of this thread is Microsoft deciding what they want to host, not the government telling Microsoft what they are permitted to host.


There is a subtle but important difference between blocking speech because children encountering it will supposedly be harmed, and blocking speech because the adults that encounter it will then be more likely to harm children.

A similar difference exists between preventing children from gambling, and preventing adults from gambling away the money required to feed their children.

Even though the harm in the latter examples is actually easier to substantiate, curtailing speech because of the harm adults might do (even to children) is rather difficult, so demagogues tend to focus on the former for their bombast.


You realize that your argument is an argument against democracy, right?

If mere exposure to bad ideas is this dangerous then mass scale democracy is not viable. The viability of democracy beyond very small groups is dependent on the idea that human beings are capable of independent rational thought and of discussing and evaluating ideas on merit. If that's not the case then the neo-reactionaries are right and democracy is doomed to failure or worse.

What I see here is a major freakout over Trump. IMHO feeling alarmed about Trump is justified, but throwing basic freedoms and the idea of the open flow of information out the window is not the right response. The right response is to ask how and why Trump won and address the problems, ideas, and arguments that led to it rationally. The solution is also to learn how to use this new medium better. If Jones' BS is more "viral," then more rational thinkers need to learn how to play the viral game. You must learn to communicate effectively using the media of the time. If you use social media you must learn to be viral just like previous generations had to study the art of the persuasive essay or how to do makeup properly so you don't look like a sweaty pig on a TV interview (see Nixon vs. Kennedy TV debates).

If it really is true that bad ideas are inherently more viral than good ones and that cannot be combatted, I do not see how democracy can survive at all in the Information Age.


>The right response is to ask how and why Trump won and address the problems, ideas, and arguments that led to it rationally.

Absolutely.

> If Jones' BS is more "viral," then more rational thinkers need to learn how to play the viral game.

There's an inherent danger here. Viral media is in many ways a short-attention span, high time preference sort of game. It's probably not conducive to a high level of discourse. It is IMO a race to the bottom, much the same way fast-food restaurants compete. Eventually, you need to open up a Whole Foods or two.


> But when you have the whole internet as your audience, no matter how many times you are refuted, you can say it again and a whole group of people will be hearing it for the first time.

Why is that a problem? You can't force people to believe in reality. Might as well have this process happen in the open to actually provide a disincentive to believing conspiracy theories/propaganda/ads/fake news/whatever.

Hell, I'll go as far as to say that controversial and commonly misbelieved concepts are the most worth discussing in a public forum. For instance, consider white supremacy: if you give people a choice between social groups and make them feel welcome, they can use simple, self-serving logic to join the one that meets their needs. White supremacy doesn't help white people, rationally, it's just stupid, except for very convoluted goals. See: https://www.nytimes.com/2017/08/22/podcasts/the-daily-transc...

If you don't enable conversations between white supremacists and anti-racists, there is no real choice--they'll gravitate to the group where they're accepted.


> White supremacy doesn't help white people

It absolutely does help the people who believe in it: it gives them a sense of superiority and someone to blame. People walked away from lynchings with a sense of satisfaction and justice, not a recognition that they'd just committed a horrible crime.

These aren't rational reasons, but since people can go through most of their lives without really having to do things for rational reasons that doesn't matter.


>For instance, consider white supremacy: if you give people a choice between social groups and make them feel welcome, they can use simple, self-serving logic to join the one that meets their needs. White supremacy doesn't help white people, rationally, it's just stupid, except for very convoluted goals.

Doesn't the fact that white supremacist groups still get members contradict your idea here that people won't join them?


> Doesn't the fact that white supremacist groups still get members contradict your idea here that people won't join them?

I think their point was that people join because they aren't welcome anywhere else; if there were other places where those people were welcome, maybe they wouldn't join those groups.


"Why is that a problem? "

Ask the parents of children murdered at Sandy Hook.


When you say it’s like this one way but not the other, you have conceded to relativism. You have basically said there is a time and place for stealing someone’s wallet, and that acting as a moral agent in society can be suspended under the right conditions.


Is that a valid argument though? Of course things should be judged on a case by case basis, and not on dogmatic one size fits all rules.

Even courts take context into account for the same action (e.g. whether a killing was premeditated or committed in the heat of the moment, whether someone stole because they were hungry or just greedy, etc).

And, yes, "there is a time and place for stealing someone’s wallet", as well.


But it's equally illegal to kill anybody based on race, class, political belief or religious belief.


Is that true? If the murder is racially motivated and thus a hate crime, isn't the sentence longer?


It's equally illegal to perpetrate a hate crime on anyone of any race, nationality, gender, etc. It's a bit harder to do it on people who are not in a systemically-discriminated-against class, but if you find a way then it's a hate crime like any other.


That's a comparatively recent law though. Hasn't been always the case (e.g. colonialists used to have "open hunting season" for native populations at certain times and it wasn't at all like killing one of their own race). And I doubt in the 30s South killing a black person was "equally illegal" as killing a white person (if not in law, surely in practical legal outcomes -- people just getting a slap on the wrist).

As for those other things (whether it's equally legally bad to kill someone regardless of "class, political belief or religious belief") all of those have been contested in actual law across societies (including western ones), even 20 and 30 years ago (and tides can always turn one way or another again).


Do you have a point? Your comment isn't applicable to the conversation we're having in this thread.


Tell that to the soldiers spreading democracy.


It's equally illegal for soldiers to kill non-combatants.


I think most people would agree that there are indeed times and places when stealing is acceptable. It is never ideal, but sometimes the alternatives are worse. The bad sort of relativism is a weakness of principles, not the same set of principles leading you to prefer different actions in different circumstances.

For another example of this kind of relativism, I would harm somebody in self-defense or the defense of another, but I would not harm somebody just because I didn't like them. The underlying principles are the same even though the right action is conditional on the situation.


I honestly don't understand the connection between what you said and what the person you're replying to said. Can you explain it another way for me?


Scale affects things. An audience of 12 vs 12 million is a significant difference.


Or, it could be that the two things being compared have entirely different contexts to them, and thus do have different rules.


This is the road to rule by men and not laws, since at that point any politician can make up whatever "context" they like to interpret the law any way they see fit.

Be careful about the precedent you're setting and think about who may wield these powers in the future. I'm sure President Alex Jones will think it quite necessary to make exceptions to ordinary laws and rules when taking down the global pedophile cabal is at stake.


There is a difference between thinking that a certain social medium, "works" because of moderation (such as HN), and between thinking that NO uncensored platform should ever exist.

I enjoy lots of moderated platforms. But I still think it is important that uncensored platforms like Gab exist at all.


Is it possible for a bad idea to be true?


I think part of the implicit understanding in this thread is that part of an idea being "bad" is that it is not true.


[flagged]


I'd add it's not just HN, but most social sites are far into self-censorship territory. I agree though, HN is much more hostile these days.

WRT dang, even today he shut down a thread regarding the deepening SEC inquiry into Musk (which had new news as of this afternoon), and said that this was something along the lines of "Musk hysteria" or some such thing.

I was kind of shocked at the removal of that thread as it was covering a new development, and was not, as dang put it, a dupe or hysteric.


I rarely agree with dang, but he is correct in this case.

Media should take a break with reporting about Musk, most of the recent articles are either oil-sponsored FUD about Tesla or personal attacks on him.

Would be nice if Elon took a break from Twitter too...


> A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them.

Not to get too far off topic, but is this actually true in practice in the real world? Scientific studies have shown that the idea that gets repeated the most often is the idea that "wins". A bad idea gets repeated and spread every time it gets criticized. If the idea proves more "credible" than the criticism of it, then all that criticism does is contribute to the spread of the idea it's trying to kill.


I would like to know this as well. Case in point, when I was a kindergarten teacher, kids wouldn't have a concept of race. I'd have white girls asking me if I could put their hair in braids like "the tan girls."

But by 4th grade I'd be teaching about slavery, racism, etc, and the kids would be then learning about a concept that was utterly nonsensical to the non-racism-indoctrinated kids. So I'm wondering what would happen if I never taught about racism at all.


Unfortunately, a lot of racism is picked up at home.


> by 4th grade I'd be teaching about slavery, racism, etc, and the kids would be then learning about a concept that was utterly nonsensical to the non-racism-indoctrinated kids. So I'm wondering what would happen if I never taught about racism at all.

Assuming these kids are in the U.S., then you omitting the topic from their 4th grade class would only omit a tiny portion of what they encounter in their lives. If my 4th grade teacher omitted something from history class, I don't think it's possible that I've never learned about it (assuming it has some significance).


Man, I grew up in a third world country where race was a complete non-issue (unless your color made you stick out like a sore thumb, in which case, you'd be a celebrity, not get picked on).

Coming to the US is what made me encounter racism. American society really is broken from within when third world countries can get around this but America can't.


I think this is a bit of apples and oranges. There are vast differences between pretty much any two cultures.

I spent three years living and working in a third world country where I “stuck out like a sore thumb”. Most people I met assumed I was rich, despite living and working on the equivalent of a local secondary school teacher’s salary. And this was still a hell of a lot more than most people in the country were making, but my local counterparts did not have to constantly deny requests for money and things from strangers.

Perhaps a more apt comparison to racism in the US would be sexism in the country I worked in. Men would cat call women constantly to ask for (or demand) sex and marriage. This was just an accepted and normal part of the culture there. It was crazy.


Well, the question is specifically discrimination on the basis of race, so apples to apples there.

Maybe other countries score lower in sexism. (btw America does too, we all know that :) ) However, it still doesn't take away the fact that the extent of racism in America is far too high for a first world country.

As much as it pains me to say this, at a societal level, I don't see any difference between my third world country of birth and America. The only difference between my home country and USA is interstate highways.


Do the people downvoting this disagree that America has too much of a racism problem for a first world country, or what's going on here?


> Do the people downvoting this disagree that America has too much of a racism problem for a first world country, or what's going on here?

Unfortunately, that has been my experience. A lot of seemingly well-intentioned Americans refuse to accept their follies. They are completely unaware/unwilling to admit that the rest of the world is watching what's happening in America.

This is possibly due to the ingrained, "American exceptionalism" belief but I am willing to be proven wrong.

As a comparison, see this Brazilian admitting flaws in Brazil: https://news.ycombinator.com/item?id=17735229

But Americans admitting racism? Nope.


FWIW, I think the real question is whether or not people’s inclinations can really be changed by exposure to information. To me it seems more likely that people seek out the information which confirms their world view. And if that’s true, then I think the imperative to shut down hate speech is greatly undermined.


People do seek out the information which confirms their world view, but people also can really be changed by exposure to information. Read accounts of racists who have reformed. It's always the same story. Their beliefs were slowly chipped away by exposure to information and (most importantly) a diverse group of people. They spend time with one person who doesn't conform to their prejudices and then another and eventually it clicks for them.


Right, but what about the people who don't reform? Are they somehow not exposed to all the same stuff as somebody who does reform?


Well, yeah. A lot of racists never get to know the people they claim to hate as actual people or even interact with them beyond the most superficial encounters (passing one on the street, having one hand them change at the store, etc). They even tend to stick to media which leaves them comfortable in their world view. Getting them out of their comfort zone is crucial for change (likely true for everyone, really). Not everyone will change either. There are some remarkably stubborn people, but the only chance any of them have involves them being confronted with information outside of the narrow echo chamber they live in.


You can never stop bad ideas from spreading, but you can decide if you'll ever hear about them or have the opportunity to fight them. If the most repeated idea wins, but bad ideas are forced to spread in hidden places without rebuttal, then bad ideas are the only ones getting any voice and they win by default.


> but bad ideas are forced to spread in hidden places without rebuttal, then bad ideas are the only ones getting any voice and they win by default.

They win by default in those dark places. Important distinction.


The people in those dark places are the ones you have to worry about. Most people don't hold dangerously wrong ideologies like neo-nazis do. What amount of rhetoric or marches or speeches would convince you to be a nazi? If you are like most people, no amount ever would. The most vulnerable out there are the ones who find themselves isolated - those who end up in the dark places, or are simply raised in them from childhood. They can be reached, but not if they never hear a dissenting voice or have an opportunity to have the views they were taught challenged.


Nazis in dark spaces have always existed - their modern grasp of distribution on internet platforms is what has changed.

For years stormfront was a well known butt of jokes about racists, but it existed. What has changed is nazis have started using the new distributive platforms (youtube, facebook, twitter etc) to spread their word outside of the dark space.

All we need to do, is deny them that distribution. Motivated racists will still find their stormfront but they'll stop achieving the bycatch of people who didn't have the skills/motivation to seek out other racists on the internet.

Without distribution their views are repugnant but largely ineffective.


How many videos would you need to watch on youtube before it turned you into a nazi? The people watching those videos are already true believers, or they are researchers, police, and others concerned about what racists are saying, how many of them there are, how they operate, etc. It's invaluable to have ready access to what they're up to. Access that would be lost if they were forced into their own closed circles away from public view.

Now someday one of the racists on youtube might see a reaction video in the sidebar and they click it expecting "librul tears" but end up leaving with something to think about. Youtube videos aren't usually how racists find reform, but every little thing that chips away at the lies they've been told helps. If they were forced to watch their racist videos on a racist only video site they'd never have a chance to see anything else.

In every last story of a racist who changed I've ever heard, the thing that turned things around for them was something breaking through that racist's echo chamber. Forcing racists further into their own isolated groups is the opposite of fighting racism.


> The people watching those videos are already true believers

No they're not. The YouTube algorithm (for example) encourages people to slowly ramp up how extreme the content they view is little by little.

https://www.theatlantic.com/politics/archive/2018/03/youtube...

> but every little thing that chips away at the lies they've been told helps.

You're assuming they even see this content. The way "the algorithm" works makes it increasingly unlikely that platforms will even show you things that oppose your view.

And if they do... the backfire effect means they'll often reject your opposing view even harder. https://youarenotsosmart.com/2011/06/10/the-backfire-effect/


It is true that the algorithm does work against them, but it isn't a perfect bubble. I'm sure every racist on youtube had that same casting an obsidian sword video recommended to them I've been seeing. I spent a couple months looking into the alt right a couple years ago and I saw it for myself. The algorithm is just going by what gets the clicks so when a major topic comes along it's not too uncommon to see different sides pop up as long as they are on the same subject.

The backfire effect is real, but it's obviously not impossible for a person to change their views on something. Even something somewhat fundamental. It can take a long time for a person to come around and they need to come to terms with the truth on their own but it can be overcome. Even the things they don't accept today will be in the back of their mind when they come across something similar in the future. It's not easy to make that kind of change, but people do it all the time.


I don't think you could show me enough inane neo-Nazi videos to make me a Nazi. Insane, ready to gouge my eyes out with an egg beater... maybe... but conversion is not likely.


This seems to be a common enough retort in this thread, but it's really just a single person's opinion about themselves that doesn't mean much. You think you're not susceptible, good for you!

The topic of radicalising people via mainstream social media to become nazis or fight for ISIS or whatever gross extremism they're susceptible to has been around for years now and it's pretty much settled that yes, you can radicalise people via these channels.


The question not being asked is: what makes people susceptible to radicalization?

I think it's telling that America's potential for radicalization might be comparable to that of a former war zone like Iraq. We are not a healthy society. De-platforming will not make us a healthier one.


> I think it's telling that America's potential for radicalization might be comparable to that of a former war zone like Iraq.

I was actually referring to the many British & Australian citizens that were radicalized and traveled to Syria to fight for ISIS.

Removing the ability to broadcast these vile ideologies into mainstream channels will certainly reduce the incidence of those susceptible to radicalization discovering them and being converted.


Same question applies I think. My post was US-centric but the same issues exist to varying degrees across the developed Western world. Check out the unemployment rates in parts of Europe.

My overall point is that we should be looking inward if vast numbers of people are being radicalized. It means our culture is simply not compelling enough to overcome the allure of these cults.


> My overall point is that we should be looking inward if vast numbers of people are being radicalized. It means our culture is simply not compelling enough to overcome the allure of these cults.

Oh I very much agree with you - there has been a fundamental change in as little as one generation as to what 'life' is for men, with traditional roles thrown out the window. Combine this with difficult economic times for young men and it's a recipe for this kind of thing.

For those who can't forge their own path they'll find someone that accepts them - sometimes good sometimes bad. Often these extremists offer belonging and self worth to those who have been unable to build it themselves.


> What amount of rhetoric or marches or speeches would convince you to be a nazi?

There's a historical answer to this and it's found in the number of people who were convinced to join and vote for the Nazis in 1930s Germany. They were very good at propaganda.


then bad ideas are the only ones getting any voice

That's irrational, resting on the assumption that good ideas which encounter opposition in public discourse thereby cease to exist.


I'm not saying anything about ideas ceasing to exist. Only that ideas discussed in closed communities which have become echo chambers are never challenged. Members of those communities will never hear anything but the rhetoric of the group they are isolated to.

This is just as true of good ideas as it is of bad ones, but that's far less of a problem because groups of people spreading good ideas among themselves isn't typically dangerous, although there are a lot of advantages to having diverse perspectives in every group.


This is called the Rebound effect[0]. There is a Debunking Handbook[1] for dealing with this phenomenon.

There is also the idea that a lie travels around the globe while the truth is putting on its shoes, which spurred /r/AskHistorians to ban holocaust denial[2].

[0] https://en.wikipedia.org/wiki/Rebound_effect [1] https://www.skepticalscience.com/docs/Debunking_Handbook.pdf [2] https://slate.com/technology/2018/07/the-askhistorians-subre...


Apologies for the brain fart, it's not the rebound effect, it's the backfire effect[0]. I can't edit my post anymore.

[0] https://rationalwiki.org/wiki/Backfire_effect


Now figure out how debunking works when the guy putting out hate speech can block at will and/or controls the website.


Without judging the validity of some of the statements made/cited in the "Debunking Handbook", I can't help but wonder how many of the psychological experiments mentioned in it are part of the reproducibility crisis.

Reading that handbook, whenever a social experiment and its outcome was mentioned, my initial thought was: "Hmm, interesting. I'd like to reproduce that experiment… several times in fact, and fiddle with the parameters, to see if, and if so what, circumstances may influence the outcome."


That common argument rests on norms established by millennia of relatively difficult communication. Best not to assume that the same people would come to the same conclusions today.

For example, QAnon believers have not had their speech restricted. That conspiracy theory is as open as anything you could name. People have challenged it. It's still going.

If you believe that challenging terrible ideas makes those who hold them think twice... why hasn't the exact argument you're using gotten rid of the terrible idea that we should restrict speech? Answer: because logical refutation doesn't work.

I am not arguing against the slippery slope. This is a very, very hard and risky problem. I am refuting the misconception that shedding light on bad ideas makes them go away.


A number of other silly ideas that still hold traction and do actual damage despite being out in the open and actively refuted:

Moon landing conspiracy

Anti-vaxxers/Vaccines cause autism

Fad diets and "detox"s

Anti-intellectualism

Alex Jones (to the tune of millions of subscribers on his recently killed youtube)

Global warming denial

etc


And who decides what is and isn't a silly idea? For 40 years, Americans were told by the government to believe that low-fat, high-carb diets were the key to a healthy life. Not only is that not true, it's never been true - there was never any evidence to support that stance. And yet, that was the official government policy, and subsequently the commonly held belief, for 40 years.

"It ain’t what you don’t know that gets you into trouble. It's what you know for sure that just ain't so." - Not Mark Twain


And who decides what is and isn't a silly idea?

This is a bad faith argument. I say so because the purported mechanism for such decision-making is actually spelled out upthread, and because this rhetorical question is then followed by a glaring fallacy of composition that seeks to draw a general conclusion from a single and highly over-simplified example.


So, the anti-GMO movement has far and away more blood on its hands than even Alex Jones's dumb theories (see golden rice). Should we police that speech? I believe anti-GMO is a silly idea, and a deadly one. Far more deadly than Jones claiming Sandy Hook wasn't real, far more deadly than a 9/11 conspiracy theory. Yet we allow it; in fact, in many cases we encourage it at the corporate/company level (see Chipotle).

There are tons of "mom blog" type pages on Facebook that push this crap; it's a much, much bigger industry than Jones. Where is the outrage there? Is it OK to ban their speech from a corporate platform? It does way more societal damage. Gwyneth Paltrow pushes things with her company "Goop" that can cause serious health problems. Should we ban her from Facebook? I just wonder where you draw the line, because if we're talking raw statistics on societal damage, InfoWars or virtually any form of "hate speech" isn't even in the top 10.


It’s not a silly example but rather a common one. Many years ago, only the fat were considered attractive, because they were the financially secure. Many believed that royalty was the only form of legitimate government, and that colonial activity among “savages” was doing them all a favor.

Science, academics, and even personal experience vary with time. Not all variance that follows is truthful...some is exaggerated purposefully or accidentally, though most is considered good information at the time.


I didn't say it was silly. Please try to differentiate between the quotation that I'm responding to and what I said in response.


> And who decides what is and isn't a silly idea?

Any private, non-common-carrier party that is asked to relay it.

That is what freedom of speech/press means: people are free to evaluate content and decide for themselves whether to participate in spreading it.


I think that's the core question right there. For a while, various governmental bodies got to be the authority figure deciding which ideas were widely propagated. It's interesting to consider the outsized influence of Texas school boards here; the people who decided which textbooks would be used in Texas were elected officials. Texas has a huge textbook market. Accordingly, textbook vendors were incentivized to publish textbooks which fit Texas standards, and other states who didn't have the same influence wound up with those textbooks.

So: "who decides what is and isn't a silly idea?" Texas elected officials.

That's suboptimal in my view. It is also not great that four or five major companies have the same decision-making power. Fixing the problem is difficult, but I take some comfort in knowing that it's not actually a new problem.


That's all fine and dandy, but is there actual factual standing behind the positions I listed?


You posted them as examples of "bad ideas." Shouldn't you be able to tell me the factual standing behind the positions you hold?


Are you asking me to go through each of them and give a thorough post about the background of the issue, the facts as they lay, the most reasonable conclusions, the possible influences/biases from each side and their possible agendas etc etc etc?

And you're saying that I should do this each and every single time someone wants to discuss a (IMO) discredited study or blog post or something?

Do you not see how easy that is to abuse by people acting in bad faith? It's a pretty classic DDoS.

https://en.wikipedia.org/wiki/Gish_gallop


Isn't that what you tried to do to me? You asked me this: "is there actual factual standing behind the positions I listed?" First of all, why are you asking me about the factual standings of your examples? And second, you didn't just allege that the ideas you listed were 'silly'. You said they do "actual damage" despite being out in the open and actively refuted. Is it so much to ask that you provide proof? Had you just said "I think these are silly ideas" we wouldn't be having this conversation.


Hang on, just to check, which of the specific things called out as "bad ideas" do you take issue with?


Well, this sort of gets to the heart of it, doesn't it? A lot of what's being called a problem is basically people who go Gish galloping; they spew easily-refuted statements, rarely if ever offer any evidence or anything else to back them up, but if you so much as hint at the possibility that perhaps they might have said something a little bit mistaken, they come down on you like a ton of bricks with demands for citations and evidence. And on a platform like Twitter, it's not even one person who does that; wade into the wrong thread and you might have hundreds or thousands of people mobbing you with "citation needed" and ignoring any evidence you don't personally direct at them and them alone (and that's being charitable; most of them ignore evidence, period).

This wears people down quickly to the point that they just give up on challenging incorrect information, and then the people who spew it win by default and take over whatever platform they're on.

Why shouldn't platforms have the freedom to put a stop to that if they choose? What value is there in claiming "neutrality" on this (and really, there is no "neutral" -- there's only enabling it, or not enabling it, the platform's owners don't get to claim they're just bystanders when they're, y'know, providing the platform)?


> Why shouldn't platforms have the freedom to put a stop to that if they choose?

That's the heart of the question being asked. On the one hand, there's freedom of association. On the other, freedom of speech. Which one trumps the other and to what extent?


Spam is a form of speech.

For years, most providers and platforms have banned it, and people have organized to boycott providers who didn't.

Why is that apparently OK but other forms of organized boycott, or disassociation based on the content of speech, are not?


If the problem is "what determines truth", then there isn't an organization out there that could resolve the issue (barring something like the Chinese government, I suppose)


Did you just organically come up with the Ministry Of Truth?


I think their point was that unless you're willing to accept Minitrue you don't have an authoritative source of truth to check against.


Note that most of these existed well before the internet and were just as widespread. Generally the people who post this stuff are the people already preconditioned to believe it. These people also have their views emboldened when they see it is banned...they view it as some conspiracy to hide the "truth." IMO, these bans against Alex Jones aren't going to have the good intended result they were meant to. If anything, this could drive more direct ad dollars to him and allow him to expand into another vertical that may attract and empower even more like him.

But I'm biased; I am generally against any form of censorship unless it is exploiting someone against their will (see snuff films, pedophilia, etc.). Slightly unrelated, but I fear that the Russian boogeyman is going to creep its way into outright censorship in the West via corporate America... which ironically is an outcome Putin and other Western adversaries desire.


> Note that most of these existed well before the internet and were just as widespread.

Do you have a citation for that? It seems unlikely that the Internet didn’t increase the reach for fringe views by making them available to everyone rather than just the few who requested conspiracy materials by mail.


> just the few who requested conspiracy materials by mail.

I remember conspiracy theory / UFO magazines being more widely distributed than regular newspapers. There's even a joke on that in the first Men in Black movie.


Was that ever more than a joke, though? The National Enquirer sold plenty of issues but almost everyone knew it was entertainment rather than a reliable source. Similarly, while bookstores had books & magazines they tended not to shelve them with more legitimate materials.

The other big difference was the barrier to entry: printing things cost real money and anything at all mass-market required a physical presence which could be sued for libel. In contrast, it’s much easier for anyone in the world to slap together a decent enough website or upload a video to YouTube and Google will feature it right next to official NASA content. Sure, anyone who wants to can learn that the source isn’t credible but the last decade really underscores that most people won’t check.


Are you really suggesting that conspiracy theories such as JFK, Roswell, Area 51, anti-vaccine/"mark of the beast," the Moon landing, Bigfoot, etc. weren't already in popular culture before the internet was popularized? Because, in fact, many of them spread because there was no wide usage of the internet (Roswell probably being the #1). It helps create lore. Even The X-Files, a show that I adore, was really from the pre-popularized-internet era.

Think of Ruby Ridge, Waco, etc. Those conspiracy theories led to the Oklahoma City bombing. And they were widespread prior to the internet being a big thing. Think of the Turner Diaries which was big in the early 90s, there are too many examples to even list.

9/11 was maybe the first popular internet-spread conspiracy, but I would make a strong argument that without the internet it may have been even more widespread, because there would have been fewer online debunkings.

I don't know if you've ever argued with a conspiracy theorist but their minds were made up before any major theory was proposed. It's a natural distrust of authority.

Do I have hard data? No, and I am willing to bet that it would be incredibly hard to measure that anyhow, but I do have tons of anecdotes and history to back up the claim that it's nothing new. It's human nature.


re: "Think of Ruby Ridge, Waco, etc. Those conspiracy theories led to the Oklahoma City bombing"

What part of the admitted shooting by FBI snipers of Vicki Weaver, while she was holding an infant in her hands, and of one of Randy Weaver's under-18 sons [1], or of the use of explosive/flammable igniters to disperse "CS gas", a form of tear gas, against people inside wooden buildings in Waco [2], is a conspiracy theory?

Even the US Gov't admits to the basic facts of both situations...?

[1] http://www.cnn.com/US/9510/ruby_ridge/ [2] http://www.cnn.com/US/9908/24/fbi.waco/


I think they may have meant that the conspiracy theory that an interconnected federal government planned Ruby Ridge and Waco, and planned more such incidents, led to the OKC bombing. What the government did was wrong and arguably it was systemically predisposed to do wrong in situations like that. But it wasn’t planning more such actions, the OKC bombing wasn’t connected (I think), and even if so the belief that bombing, rather than protests, court cases and elections, was the right thing to do, was delusional.

But if I’m wrong and they think that it’s a conspiracy theory to think those situations were mishandled or shouldn’t have been instigated, then thank you for bringing up the facts; we shouldn’t be credulous and assume US Marshals and ATF agents are competent or just simply because we saw them on TV.


Exactly. Ruby Ridge, and Waco for that matter, were completely bungled operations, and imo were largely unjustified, ham-fisted responses. However, they weren't inherently nefarious - just sheer incompetence and cowboy-minded policing. Yet their mere existence fueled a far-right movement that believes they were. In an unfortunate turn, the OKC bombing was a direct result of that. It fell right into the preaching of Koresh as well as the popular right-wing writings "The Turner Diaries." That was more or less my point, and all of that occurred well before people had regular internet access.


Well, that sheer incompetence indeed disappeared when it was time to cover all this up. Was that nefarious? Because this is what eventually motivated McVeigh, not some strange conspiracy theories. Let me quote:

"I waited two years from "Waco" for non-violent "checks and balances" built into our system to correct the abuse of power we were seeing in federal actions against citizens. The Executive; Legislative; and Judicial branches not only concluded that the government did nothing wrong (leaving the door open for "Waco" to happen again), they actually gave awards and bonus pay to those agents involved, and conversely, jailed the survivors of the Waco inferno after the jury wanted them set free.

"Other "checks and balances" likewise proved futile: media awareness and outcry (the major media failed in its role as overseer of government ally); protest marches; letter campaigns; even small-budget video production; etc. - all failed to correct the abuse."

Source: https://www.webcitation.org/5wow5v4MK?url=http://www.foxnews...


The initial intent was not nefarious, the execution was incompetent, the cover up of the incompetence was nefarious. That's not the same as saying the ATF torched the place to the ground, which isn't backed up by anything...and that's what McVeigh believed.


I’m suggesting that we’re seeing things spread wider and faster than before, and that it’s easier to make things look legitimate because a lot of people assume things on YouTube or Facebook are vetted in some way.


Which is not an unfair assumption given they do remove some things.


[flagged]


In your understanding, did millions of jews die in the holocaust as part of a German plan to eradicate a race?


Long-lost diary of German officer Alfred Rosenberg vindicates his testimony at Nuremberg.

https://www.washingtonpost.com/local/long-lost-diary-of-nazi...

> Scholars had been eager to see what this longtime Nazi from Hitler’s inner circle had to say in the missing journal. But details of the Nazis’ grand plans for genocide and brutal domination are absent from the pages.

Rosenberg testified in Nuremberg that there was no such plan for the wholesale murder and eradication of the Jewish race, and that the idea had never even crossed his mind. In other words, the only plans the Germans had was to expel the Jews from their territory. Nonetheless, he was not believed and sentenced to execution by the court.


You missed these parts of the article:

> "Rosenberg does occasionally raise the issue of Jews. On March 28, 1941, he referred to the opening conference of his brainchild, the Institute for Research into the Jewish Question.

> “I regard the conference as a success,” Rosenberg wrote. “It is, after all, for the first time in European history, that ten European nations [were] represented at an anti-Jewish conference with a clear program to remove this race from Europe.

> “And now this perception of a historic necessity is backed up by force.”


[flagged]


Holy hell, an actual Holocaust denier.

You know, there are survivors of these extermination camps still around; some even speaking of their experiences. All liars in your warped world view?


[flagged]


Amen.


So?

Laugh at them, and then contribute facts.

Next.

At any given time, these things are all part of the conversation. Suppressing them only adds to the allure.

Attempting to manage it "for their own good" will not end well, and it won't end well because way too many people will fail to trust it, will share that mistrust, and will undermine it all.

As they should. That problem is why there is a First Amendment in the first place. I know that does not apply here, and I know why too.

Not my intent.

The dynamics that lead to having a First Amendment are my point.

The moment people begin to mistrust, and that mistrust catches fire and spreads, is the moment the dialog gets toxic and hostile.

See my comment up thread.


And what about people like my mother? People who didn't do so well in statistics and science and math, and therefore don't have the tools to rationally judge one claim versus another?

How do we help them not get pulled into, often alluring and attractive, circles of hate?


I know you're sincere, but do you see the danger in what you just said? Restricting ideas your mother can hear, based on your assessment of her competency to understand them "properly"?

That's a very dark path that does not lead to utopia.


I love you mom, but those people are fucking clowns.

How can I help?

(Have had this exact exchange. Took a while, but did resolve. Human problems take human work.)


Lane Davis's dad tried that approach, arguing to his son that he (the son) had fallen down a neo-nazi rabbit hole. Davis denounced his dad as a 'leftist pedophile' and stabbed him to death. I could supply a laundry list of examples, but the basic point I'd like to communicate is that once someone signs up to an eliminationist philosophy we have (to misquote Churchill) established what they are, and the only question is in what order they intend to pursue that project.

I want to be careful here to distinguish between ideas that are merely wrong and those that involve imposition of their calculus upon others. For example, I consider flat earthers ridiculous, but it's not like they are threatening to flatten out anything they consider overly curved.



If it's actually criminal speech, enforce the law.

If it's not actually criminal speech, the answer is more speech.


That's a very nice platitude, but as reality has shown, it's a terrible rule.


It's not a platitude. And it's actually quite a good rule, when enforced consistently. That was not done in recent cases.

And, like I said. I'm just going to watch it play out. I'll be on the tail end of this thing to talk about it then. When people are more receptive.


Let's just say I am not getting upvotes on this. :D

Rather than actually have a real discussion, people prefer to "solve the problem."

It won't go well.

I will bow out of this discussion for now. There are places where a real conversation can be had, and this isn't one of them.


No, it's being pointed out to you that you're wrong - that "more speech" doesn't help, and can actually get people killed.


I have no complaints about the discussion I can see. I have no desire to worry about karma from people who would just rather downvote because they don't like what they read. Personally, I almost never downvote; I would much rather engage and have a chat about things. Most of the time that happens here; on this topic it obviously isn't.

And the whole thing is by no means definitive. I happen to have a very significant amount of experience with this.

If the speech is criminal, then it can get people killed (among other non-trivial harms), so apply the law. Secondly, apply the law where it needs to be applied. Using a bludgeon where precision instruments are more effective only makes a bigger mess than intended.

There is a huge difference between "fuck this guy, we're done" and "we're removing these because you're over the line." Even worse, what was done to Jones is the bludgeon. And it was done without the precision and consistent action needed to give the feedback necessary to justify it and help people understand what just happened.

There will be negative and severe implications from all of that. It may seem fine now, but some precedents were just set, lines crossed. I'm not sure people thought that all through. Some important trust was just lost.

That stuff is different from speech that isn't criminal, where the answer is more speech. It is also a matter of people learning to use all the tools they have, not just righteous indignation.

I'm not wrong at all.

However, I can tell the dominant mode here is some people think they can control other people. They also happen to think they know best for others, neither of which is true.

At the same time, recognizing our own personal agency in conversation, and educating people about that, and all the good it can do, is being near completely ignored in this discussion. That's an error.

And they are just going to have to learn the hard way.

I'll stand by and watch.

Wrote this on voice, sorry for the various spelling and other errors in this comment.


None of this is the slightest bit responsive to the point I made that some people's response to losing an argument is to engage in violence. There's no law to enforce here; nobody suggested that Lane Davis should be arrested for holding Nazi opinions. It's a simple observation that the solution to bad speech is not always more speech, because people who are engaging in bad speech in bad faith often aren't interested in making progress through dialog.

That's why online trolls often make arguments that are empty of meaning but cling to a posture of superiority - they are not interested in winning an objective argument, but in having the appearance of controlling the conversation, so that if the honest interlocutor gets exasperated by the trollish arguments, the troll claims victory. Provocateurs at real life demonstrations engage in the same tactics. Supporters of authoritarian rulers revel in the discomfiture of their political opponents rather than any objective improvement in their own conditions.

Now, if you want to make general policy points and articulate your views that's fine, but if you're going to simply ignore your conversation partners and beat down straw man arguments of your own creation, why should anyone take your nostrums seriously?


As a general policy, we could return basic critical thinking to primary education.

Mine included First Amendment issues along with the following:

Recognition of propaganda forms. Done in the context of ads at first, then later political media.

Agency in conversation. How to weigh words. Being called an ass by a clown, for example: the most common, basic response is righteous indignation, but a better response is "meh", or laughter to better identify the laughable. This was done in the context of ordinary conflict and media-reported events.

Bias. There is always bias in media. Objectivity is expensive, and it takes time and people working together to actualize. Is that bias honestly represented? Secondly, how does that bias color the conclusions or advocacy present? This was done in the context of news media. Was it from a labor point of view, a business one, or something else? (Economic.) For social issues, similar questions were asked and material identified.

Today, for example, few Americans realize there is almost no reporting from the labor point of view. There used to be. What has changed?

Reasoning, fallacies, etc... the basics, like what one may find at criticalthinking.org-type sites today.

My own kids did not get any of that. I provided it.

Regulating speech based on what could happen, or on blanket assumptions about people being feeble, or some other condition, will not end well. I will leave it there.


As for Lane, human work carries risks.

The control of speech needed to take that stabbing off the table is excessive.

Real conversations do involve real people who will do real things.

What I put here works.

I have done it with a gun pointed right at my chest, loaded, cocked.

That is no joke. And I could argue I had better tools to work with than the father did. Odds are favorable to me in that case too.

I have a solid set of intense experiences and some training in this area to draw on too. Did the father?

Maybe he should have those things. I would gladly fund them as part of my taxes. Money well spent. While we're at it, mandatory, blanket gun education would do a lot of good too, and in similar ways, for similar reasons, by way of simile here.

To me, bringing up one ugly fail amidst a sea of successes is counterproductive.

It is also rooted in fear of risks.

With people, speech, there are always risks.

Empowering people, doing more early human work will do more to break down those risks than taking a bludgeon to speech will.

Just one of a few reasons I felt it better to just step away.

We may just have to go through a draconian cycle here. I am not happy about that. And it will not play out like well meaning people think.

And I will be there at the peak to assist with the move back.


> logical refutation doesn't work.

It can work, if and only if it is done after real communication has been established. Most people trying to refute "obviously wrong" ideas skip the first step and start arguing in a language and framing that they understand. This is noise when the other person uses a fundamentally different framework for what counts as "facts", "evidence", "authority", etc.

One of the better explanations of what I'm talking about is this[1] essay. While it discusses the creationist "debate", the ideas about trying to educate people that are resistant to science and logic apply generally.

Yes, logical refutation rarely works. However, did you bother to learn their language so there is a foundation of actual communication to build upon? Or did you argue in a language they don't understand? Even worse, did your argument appear to contain a lot of phatic expression that is hostile from their point of view?

[1] http://scienceblogs.com/clock/2007/05/31/more-than-just-resi... (see the heading "Hierarchical View of the World" and "The Problem of Language" for the main argument)


>I am refuting the misconception that shedding light on bad ideas makes them go away.

It doesn't make them go away. It allows them to be refuted - which hopefully stems at least some of the indoctrination that would otherwise occur only in private.

If everyone I've ever known tells me vaccines are evil and I am never told or shown otherwise, I'll grow up continuing to think vaccines are evil. If you refuse to let me speak about vaccines, that only further proves that you're evil and trying to hide the truth - and strengthens my indoctrination. Now, poor arguments against anti-vaxxers also serve to strengthen the indoctrination, so there's some bad with the good... (Also, just to be certain on things: "you" and "I" are used generically for easier writing; I support and believe in vaccines.)

There are also plenty of bad ideas that do die to public scrutiny - think of anytime anyone has ever been talked out of "doing something stupid" because they shared what they had planned to do.


> It allows them to be refuted - which hopefully stems at least some of the indoctrination that would otherwise occur only in private.

This does not appear to be the case. Consider the math, if nothing else:

Bad Idea Public convinces 0.1% of those who hear it, and those people are irrationally convinced.

Bad Idea Private convinces 10% of those who hear it, and those people are likewise irrationally convinced.

I'm assuming the private bad idea is 100 times as effective, which may or may not be accurate. Even so, as long as your private circles are smaller than ten million people or so, we're better off than we would be if your bad idea was hitting a billion people on the Internet.
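
A quick sanity check of that breakeven point - a minimal sketch in Python using only the hypothetical rates and audience size assumed above (0.1%, 10%, one billion people), not real data:

    # Hypothetical numbers from the comment above, not measurements.
    public_rate = 0.001              # 0.1% of a public audience is convinced
    private_rate = 0.10              # 10% of a private audience is convinced
    public_audience = 1_000_000_000  # the bad idea reaches a billion people online

    public_converts = public_rate * public_audience             # 1,000,000 people
    breakeven_private_audience = public_converts / private_rate
    print(int(public_converts), int(breakeven_private_audience))  # 1000000 10000000

Under those assumed rates, a private circle would need roughly ten million members before it produced as many converts as the public broadcast, which is where the "ten million people or so" figure above comes from.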

I'm still not arguing about the difficulty of telling which ideas are good and which are bad.


> If everyone I've ever known tells me vaccines are evil and I never am told or shown otherwise - I'll grow up continuing to think vaccines are evil

Understand that forcing vile ideas into obscurity and forcing people to only ever see vile ideas are not even close to being the same thing. You could even say that they're almost opposites.


Thought experiment for you:

Say we remove Qanon from all platforms of reputable note. Do you believe 4chan would ever comply?

Secondly, what impact does that have on the reputable platforms vs 4chan?

How does that all break down for people?

My thoughts are:

1. 4chan will continue to allow very liberal speech. People will continue to visit, and those numbers will increase due to the Streisand effect.

2. The number one and two responses to Alex Jones's recent and widespread ban are:

Who is next?

FINALLY!

And those trend along authoritarian lines most clearly. Anti-authoritarians are more critical and make it clear they do not need to be protected from bad speech.

Authoritarians make it clear they value said protection because of what could happen.

I bet the same happens with QAnon, should that decision get made.

Net outcome:

Polarization along authoritarian lines.

Frankly, I think Jones is laughable. I also think specific speech he made should be removed because it is defamatory and or inciting in nature.

I think a blanket, "fuck this guy" is a grave error.

I think that because:

The answer to bad speech is more speech, and post removal is itself a form of speech, provided it is used where speech is actually criminal. And in that case it is understandable too.

Bans are not those things and can create martyrs as well as augment attention to other platforms where fans, supporters will quickly gather to discuss injustice.

I think the laughable should be laughed at too.

Civility is difficult here. Forcing that, as in nobody gets offended, dilutes speech down, and prevents "more speech" from working as it should.

That all forms a resonant cycle we probably will not like very much.

At the root of all this is forced trust and an abandonment of safe harbor. That will not be easy to undo, and it will divide people into those authoritarian vs non authoritarian camps too.

I gotta be honest and will gladly side with the anti-authoritarians.

Do not need the protection, and I have a very, very thick skin. There are few ways to actually impact me, and I do understand my agency in conversation well.

I can't control others, but I can control me. Bad speech from clowns and asses is entertainment. It isn't something I give weight to. Nobody should.

That they do, and that they are enabled with this idea of protection being somehow needed is likely to do more harm than good.

Anyway, the Qanon types will find and communicate where the speech is, just as the Alex Jones types will. And then they will share that and martyr status should prove effective in rebuilding the lines of communication, only now we have doubt.

Who is next and why?

How do people know the difference between corruption and abuse of power / money from a genuine act to manage criminal speech properly?

What of things like profanity?

More basically, how do people understand they are having a real conversation as opposed to just a permitted, even encouraged one?

I want the real conversation and will seek that. Got no time for Disneyland type conversation.


I feel as though the "sunlight is the best disinfectant" argument has taken a beating in recent times, and it's not difficult to see why.

Take the example of QAnon. It's a movement based around utter nonsense conspiracy theory. It is debunked on a daily basis. It still has a lot of followers.

Compare to Milo Yiannopoulos. I remember around the time his social media accounts were banned that people said doing so was pointless, and that banning him would only increase his popularity by making him look like an outlaw. Yet his profile has sunk almost entirely.


I feel like there's an elephant in the living room nobody is seeing, and that's general loss of trust in our institutions. These kinds of ideas have more traction today than they did 20, 50, or 100 years ago because people trust their government and their public institutions less today than they did back then.

IMHO a certain amount of this calling for censorship is people from those self-same institutions refusing to look at their own role in undermining the trust basis of our society. How many mainstream journalists bothered to check on those Iraq WMD claims or investigate the realities behind the 2008 financial collapse? How many politicians?

Trust is hard to earn and easy to squander. When it is lost it leaves a vacuum. If nobody is trusted the vacuum gets filled by whatever random nonsense sounds superficially good to someone.

To me the popularity of crazy populist demagogues is entirely explainable as a response to several large scale and very notable cases where our institutions have burned their credibility. If something isn't done to correct this and start re-earning trust I predict that this situation will only worsen and we'll be facing full-blown fascism shortly.

Censorship does not re-earn trust. In fact it does the opposite. Right now hordes of people out there are saying "see! Alex Jones must be onto something or why would they go to the trouble of banning him?!?" Microsoft's threat to ban Gab is the kind of PR money can't buy. I bet their traffic numbers will skyrocket.


Just curious - what are the notable cases that you're referring to which burned credibility of our institutions? By "our", I assume you mean NATO countries (or even just the USA).


Off the top of my head:

- banking crisis and the complete lack of judicial redress

- the WMD lies for the second Iraq war and the lack of judicial response for the perpetrators

- tobacco/health disinformation campaigns and the lack of judicial response

- widening income disparity and the lack of political will/power to reverse it

- climate change disinformation and the lack of any meaningful political action

- systematic tax dodging via tax havens and the lack of political response

- militarization of police

In general, the fact that for-profit companies consistently succeed in undermining the ability for society to correct them (by "financing" politicians, by disinforming the public, or by simply cheating around the regulations).

One of the reasons that the GMO and anti-vax discussions can fester on is because organizations like the EPA and the FDA are viewed as completely corrupt; the authorities simply have no credible authority left.


> the authorities simply have no credible authority left.

But it's the Russians!

I do think there was a Russian bot campaign. It was kind of obvious. But the rush to blame the entire phenomenon on them strikes me as disingenuous.

Even worse is the desire to blame the whole thing on human cognitive shortcomings while citing likely-not-reproducible mushy soft psych studies. It's a high-brow with-citations way of dismissing people as just "stupid."


Iraq and the bank bailouts followed by absolutely no reform are two massive ones, but there has also been a death by a thousand cuts. I've lost track, for example, of how many nutritional recommendations have later been retracted. Those are small, but they add up. Each time it's another reason to take official pronouncements with a grain of salt.

To this I would add a profound economic hollowing out. That always boosts distrust and brings out racism and xenophobia.


Trust isn't even that hard to earn (though it takes time), but insofar as one depends on intellectual sloppiness or even outright dishonesty for "legitimate" aims (just think marketing, elections, etc.), you cannot attack those fallacious methods as such in those who use them for much nastier things. All that's left then is squabbling about those ends while leaving the means out of the frame, which leads nowhere.


This has been my position as well for a long time, but I'm starting to feel like there's an unexpectedly enormous number of people out there with zero ability to critically evaluate information, and it seems as if all a terrible idea needs in order to gain critical mass is that it be emotionally satisfying and generally known.

I honestly don't know where we go from here - I continue to believe that giving the state the right to determine what speech is allowed and what's not to be far more dangerous than the alternative, and that's not likely to change. But it's almost as if we've discovered that idiocy is contagious, and the vector is the internet. Maybe it's temporary, because the internet is new - maybe society, having never dealt with a deluge of information like this before has no natural immunity to it, and it will develop over time. Or maybe we've found a loophole in human cognition that will end up destroying us.

At any rate, I can't fault a private company for trying to do something about it; it's like a bus company finding out their seats are the vector for some horrible disfiguring viral disease and removing them in an attempt to stop the epidemic.


The alt-right playbook (https://www.youtube.com/watch?v=CaPgDQkmqqM) is a great summary of common experiences with this sort of cancer online. It's next to impossible to engage with these people for rational debate, when all they care about is spreading their hatred and ignorance through the virality of the internet, rendering pointless any civil discussion by arguing in bad faith.


> I'm starting to feel like there's an unexpectedly enormous number of people out there with zero ability to critically evaluate information, and it seems as if all a terrible idea needs in order to gain critical mass is that it be emotionally satisfying and generally known.

Aristotle observed this in the 4th century BC. He wrote about it in his book Rhetoric. This is by no means a new development.


To clarify, I didn't mean that people uncritically accepting bad information was new, it was that there's now a means of getting that information to everyone with little to no effort, and the same easy access to good information doesn't seem to do much to mitigate it.


It's not in a vacuum though; other things multiply it. Radio was a phase shift and an important factor in enabling fascism, along with industrialization. You can say WW2 and the Holocaust weren't anything new because there's always been war and murder, but that's like saying a glass of water and the ocean are the same.

Now we are moving towards automated analysis and manipulation of billions of people in real time, combined with incredible means to observe and control, not to even talk of weapon systems; and an astonishing obedience or naivety about it on the part of the subjects.

"This is not new" is no excuse and of no help. You could say this about almost anything... as if we only get a chance to be critical of something and express that criticism (which is the first state of organizing change) the first time something "happens".


This is abstract idealism, uninformed by contact with the real world. Mobs happen. Moral panics happen. People are not at all, not at all, rational beings evaluating the logical correctness of arguments and using critical thinking to vet new ideas.

The common spaces have to be policed. People with bad intent have to be dealt with not left for everyone to deal with on their own. It is entirely possible to destabilize a society sufficiently for the social contract to break down simply by allowing bad actors to intentionally poison the social commons. You do not want to see that outcome if you are a non-sociopath human.


Experience seems to indicate the opposite, I think. The danger of niche mobs and/or terrorism is very real and scary, but losing our freedoms in order to address these perceived dangers is always far worse. For all the fear the "alt-right" has generated, exactly how many people have come to harm because of them? They probably number in the dozens, which is a terrible tragedy, but ultimately is that worth losing our fundamental rights? I personally don't think it is.


I would argue that the alt-right panic (as in, if I thought white genocide was a thing I'd be panicking too) + nihilist troll tactics elected Trump, which has caused much harm.

I think a world where those people were no-platformed more often might be a better world, but of course I can't know.

And getting no-platformed isn't losing rights, you can always make your own platform, it gets easier and cheaper every day.


You can't really "make your own platform" though. Ultimately the ISPs could shut down access to any website they don't think their customers should have access to. The only recourse would be for the censored to somehow construct their own physical internet infrastructure, which was so difficult and costly the first time that it was highly subsidized by the government.

And do you honestly think you should have the right to determine which elections should be considered "harmful" enough to shut down others' speech? There is a wide range of opinion short of the alt-right but in the same direction that could be shut down too; for instance, apparently a majority of white people think there is racism against whites now (https://www.npr.org/2017/10/24/559604836/majority-of-white-a...). Do you think this opinion is so wrong that it should be silenced from the public forum? Do you think it would even be possible to do so?


I'll keep your points in mind.

I would only insist on making a distinction between "a public forum" and "the public forum".

As an owner of a machine, I see no reason I should be forced to host some conversation I consider harmful.

OTOH I would encourage ISPs to maintain a common carrier status just out of pragmatism. Once you memory-hole connections to some IP address, I think you become responsible for the content you serve. All or Nothing.


I agree with that. We need some way of differentiating between private sites and sites that should be considered part of the public forum.


The experiences of 1930s Germany and Italy, 1990s Rwanda, Serbia and Srebrenica, and just recently Myanmar apparently don’t register on your scale of “problem”. Why is that? It is a very misguided position to say that it can’t happen here. US society has been building toward a populist-led Fascist break, with dehumanizing rhetoric by right-wingers, whipping up fear and hatred, forming armed militias and concentrating supporters in police and military, and now a true fascist as president. Why is it that it won’t happen here?


I think eroding free speech rights, especially in a time of increasing political extremism, will only make such tragedies more possible, not less. How long until extremists in power figure out a way to use the newly acceptable restrictions on speech in their favor? What happens if there is another 9/11 under Trump when freedom of speech has been sidelined? Can Trump then pressure companies to ban Muslims and political opponents off of the internet entirely "until we can figure out what the hell is going on"? Accurately predicting how loss of freedom will affect future tyranny is like accurately predicting the stock market, it just isn't going to happen.


It is a common concept, true. It’s also wrong and not supported by any studies. The reason the idea keeps going is that it keeps getting repeated despite being comprehensively debunked, thus serving as its own contradiction.

(Those entertained by intellectual diversions will observe that if you assume that I am incorrect and that in fact the evidence supports OP, you end up with a logically inconsistent position.)


> I wait to see what will be next on the chopping block.

I wish we didn't have to wait. I wish there were a large enough group of people that would boycott companies that make up the internet infrastructure (e.g. ISPs, registrars, DNS, CDNs, cloud providers, etc.) for censoring lawful content. I wish we didn't require mass media articles to get groups of consumers to act in a common interest. And I would prefer it was absolute, regardless of content, because it's frustrating to watch lots of people agree with one decision and then get shocked when these companies become more choosy on the foundation of acceptability that was previously built. To be clear though, I want no government involvement in any way (on the censoring or non-censoring side).

I for one will point an Azure sales rep (or hiring recruiter) to this as a reason I will not engage with them. I have the same feeling about CloudFlare's decision. And I really don't like either of the censored groups.


There was no government involvement in this. Microsoft made a business decision that hosting Gab would subject them to business pressures from other customers/potential customers not wanting to use a service used by hatemongers.

That's not censorship. That's the free market at work. Gab is free to host their own website; and if necessary their own DNS servers. Hell, they're free to go Tor-only.


> There was no government involvement in this

I know that, I am clearly saying I don't want it for any consumer-led efforts.

> That's not censorship

Did you mean to say it's not government censorship or do we just have different definitions of the word? Whether they can host their own site or whatever has absolutely nothing to do with what is and isn't censorship.


If I own a nightclub and it becomes a hotspot for the alt-right, who don't align with my values, and I close down that nightclub for that reason, am I censoring them? No, I'm excluding them from my privately-owned space and refusing to associate with them. Nobody's right is being infringed upon here; they're free to go to any other bar. "Whether they can host their own site" is absolutely relevant to what is and isn't censorship because all of the examples in this thread are still totally free to distribute (and even monetize) their content in public.


> am I censoring them

Yup

> Nobody's right is being infringed upon here

Of course not; who said otherwise? I just disagree with the decision taken.

> absolutely relevant to what is and isn't censorship

You don't get to make up your own definition of the word. That someone blocks the speech is censorship, by definition, regardless of whether you can say it elsewhere and regardless of what you might want the definition of the word to be. I'm not going to condescendingly paste the definition here, but it's very clear Microsoft is engaged in censorship here.


What I’m saying is: their speech hasn’t been blocked. Microsoft is making a decision not to align with a particular demographic, not shutting them out of public discourse.


Maybe the state should provide hosting services so that the public can express itself freely without fear of being excluded.


It absolutely should, but then, I'm fundamentally in favour of more services becoming government-owned public utilities. The way this would (ideally) shake out here in Australia, though, is that given hate speech is not protected speech, the result may not be so different.


Censorship has absolutely nothing to do with the government, and I am confused as to why you think so.

Censorship is when any group at all, uses their power, any power, to attempt to silence others.


Because censors were originally Roman government officials, because the dictionary reflects this, and because censor in common usage reflects this definition.


Why do you say that? EU governments are regularly complaining about hate speech and this will win Microsoft their favor.


"business impact" is almost always a lie or used as an excuse to do what you want.


Having worked for many multinational clients as an advisor, I can say with 100% certainty that "business impact" is not "almost always a lie" and is in fact quite the opposite.

On the other hand, if you're talking about the personal business fiefdoms of billionaires, then you're quite right--they make decisions based on what the sole owner wants, financial consequences be damned.


Small minded businesses then I guess.


And if they continued hosting, I would point an Azure sales rep to these groups as a reason why I would not engage with them.


I don't think that's true.

If there is a sufficiently large group of willing listeners (or determined trolls), then having a bad idea out there in reach of that audience legitimizes it far faster than private discussion could. At which point, challenging them invites the creation of a cult rather than dispersing the idea.

For a frivolous example, my roommate is now convinced that the Star Wars prequels are legitimately good movies because of r/PrequelMemes.


But is there not a large body of work refuting that idea in the Mr. Plinkett reviews on Red Letter Media?


There are some lines you can't cross. I doubt any philosophers would truly argue that free speech is absolute, and that yelling FIRE in a crowded theater should be protected because it is possible to ridicule the speaker and calm the audience before they stampede each other to death. Malicious intent and incitement of violence are not mere speech.


The example of "yelling FIRE in a crowded theater" is maybe not as good as you think:

https://www.theatlantic.com/national/archive/2012/11/its-tim...


A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them. When they are cast to the shadows they can grow, under everyone's noses, in private where only there are they allowed to exist. In private, these bad ideas cannot be challenged by others and people will be convinced to believe in them, with nobody challenging the idea as a genuinely terrible idea.

I used to subscribe to this position, but now consider it wrong for several reasons.

First, ridicule of a bad idea will have that effect anyway - people who like the idea but not the ridicule will form and gather in niche communities to incubate whatever weird ideas they hold. This is not a bad thing, as such, and can often be productive of new cultural or scientific ideas.

Second, everyone is not a rational philosopher. Some people are easily tricked by fallacious arguments, and other people are willing to treat a threshold level of social proof as equivalent to logical validation. Hence the wide currency of 'skeptical' responses to climate change, the dangers of smoking, or evolution - for a suitable fee, you can find some people who are willing to rent out their reputation or credentials and you can then manufacture the appearance of scientific controversy.

Third, proponents of bad ideas tend to make bad faith arguments, so challenging a bad idea is treated as equivalent to trying to suppress it anyway. Of course, such weak arguments don't hold up very well in philosophy journals or social fora, but that's because people in those contexts generally have academic degrees or equivalent study and there are fairly well established standards of discourse. If someone consistently makes fallacious arguments or recycles debunked ones eventually they will be ignored. In broader social fora those institutional safeguards are absent, while nontrivial socioeconomic payoffs for the production of bad faith arguments are present - in other words, you can make a good living out of pretending to believe the earth is flat because there's a market for comforting lies that help to relieve people's existential anxieties, and the cost of repeating the false ideas is relatively low compared to the cost of debunking them over and over.

Also, you keep making non-sequitur arguments, going from claiming that bad ideas will fester unnoticed in private spaces to slippery-slope 'you're next' arguments within the same paragraph. The latter does not follow from the former, and in fact implies that it is not possible to distinguish between good and bad ideas and that if bad ones are driven to the margins good ones will suffer the same fate despite their merits.


I'd love to see an analysis of whether this theory is true. There's another argument for allowing hateful speech, which is that it acts as a containment strategy. It appears there isn't much to that theory: an analysis of reddit's 2015 ban of r/fatpeoplehate and r/coontown did not find that hateful speech increased in the places the 'migrants' went.

http://comp.social.gatech.edu/papers/cscw18-chand-hate.pdf


I think it's because such banning made it abundantly clear that such behavior was not welcome. So that's more to do with social norms, I guess.


Why do you think that?


On the other hand, there's no principle that requires an entity like Microsoft to support services it or its customers find repugnant.


What do you think of a bakery refusing to make a wedding cake for people they disapprove of?


Sounds perfectly fine by me. Of course they can be named and shamed, but legally I think you should be able to pick your customers (except in monopoly situations and some other special cases).


That is, I'm afraid, against the law in this country.


This is a poor point of comparison, since baking the cake doesn't put anyone at risk, whereas an exhortation to commit violence against specific people does.


It's disappointing this even needs to be pointed out, that simply existing while gay is not the same thing as proliferating hate speech.


I'm LGBT and this is exactly why I make such a fuss about selective censorship every time it comes up. Everyone always assumes it will only be applied to things they agree with and never flips the coin over. They see it as justified, but only when it agrees with their morals, because anything that doesn't agree with their morals is "obviously wrong" and the difference "doesn't need to be pointed out".

That difference simply doesn't exist in many places - because "that difference" is whatever popular opinion deems it to be at the current time and place.

In many countries simply existing as I am is worse than hate speech. Because someone decided, usually due to some religious bullshit or another, that being LGBT is bad and punishable by death. Speaking publicly about it is simply not allowed. If you do not wish to be ostracized, or worse, you keep your mouth shut. Being "pro-lgbt" is not a popular public opinion in every country. It's seen as immoral, corrupting the youth, etc.. Even within countries with majority positive opinion - there are large groups of people or even areas that are still against LGBT people.

Eventually someone needs to be the one who decides what is "good" and what is "not good" and what the public gets to see and hear. To whom would you give the job to decide what you can read? How about what you can say? To save you the responsibility of thinking for yourself? What if you don't get to choose who?

Whenever you give the power of censorship to a group or person - imagine if someone you disagree with on every subject had the power to censor what you read and say. If you are not completely okay with even your worst enemy being your censor, then do not give the power of censorship away. Society has a "not in my country" attitude that recent history has shown can oh-so-quickly fall apart.

Making a cake for a gay couple, in some countries, could put your life at risk. Be thankful you live in a country that recognizes the rights and freedoms of LGBT people.


Thank you for contributing your perspective. I hear you. I don't have an easy answer; I can see the pendulum swinging the other way too, and I do worry about the precedent we might be setting. And I am thankful to live in a country that's (relatively) civil regarding LGBT freedoms.

But to return to the cake comparison: as an LGBT person as well, I still get screamed at on the street and harassed in public. Two nights ago I was walking with my same-sex partner and was followed down the street by a guy muttering, and then shouting, about "faggots" etc. He started catching up to us and we turned around, and he grabbed my partner by the neck, and we had to fight him off. But let's disregard the assault for a second and say he just followed us for a while, intimidating us, but technically within the confines of the law. Wouldn't you agree one side of this deserves to be stopped? To be silenced? I think it's an overly broad generalisation to say "Everyone always assumes it'll only work in their favour." I'm saying right here: I know that's not the case. But by the same token, trying to apply objective metrics to freedom of speech just as quickly justifies "I'm free to disagree with the way LGBT people live their lives" as much as "I'm free to walk in public without being afraid of assault." It's more nuanced than that. It isn't free or not-free. And I'd argue it's worth some people giving up certain freedoms (the freedom to denigrate other people based on their sexuality, for example) so others can increase theirs (like the freedom to walk in public without fear.)

So I'm glad I can get my cake, but if it means we're expected to shut up because we're afraid someone's going to roundhouse kick that cake out of our hands as soon as we walk out of the shop, that speech doesn't seem exceptionally free.


It's the double standard/selective enforcement by these corporations that most people have the biggest problem with.

There are high-profile "violence-inciting" people[0] who are still on Facebook et al., including some classified by the SPLC as hate groups. Google is known to use the SPLC's assistance in policing YouTube.

[0]https://en.wikipedia.org/wiki/Louis_Farrakhan#Allegations_of...

What this man says sounds much worse than whatever bullshit Alex Jones was peddling.

They have the right to run their businesses in any legal way they see fit, but they most definitely should not be doing any grandstanding like the Cloudflare CEO did, acting as the holier-than-thou defenders of the public, while plenty of even worse actors are still cozy on their platforms.


It's also a poor point because being gay isn't a choice and choosing to post neo-Nazi rhetoric is.


A lot of people conflate that situation with First Amendment rights. It actually has very little to do with the First Amendment, rather it should be covered under the Civil Rights Act.

The First Amendment only stops the government from restricting speech and expression; the government can't force the bakery nor the gay customer to say or not say a specific message. Likewise, if the bakery wants to refuse service to someone, they can't claim 1A protection as a cover for that refusal because it simply doesn't apply. The 1A allows them to not be charged and prosecuted for speaking out against homosexuality, but that's it.

However, under the Civil Rights Act, they cannot refuse service to a protected class. Where it gets murky is that homosexuality is not explicitly defined as a protected class, only sex (male or female at the time of enactment). Today though, we are learning that "male" and "female" are not the only recognized sexes and there are non-binary definitions coming out all the time. The question becomes, is homosexuality a different sex as it pertains to the CRA?

I don't know the answer to that question. The way I see it though, if you are proclaiming to be a Christian yet you use that proclamation to discriminate against a particular class or individual, you are going against the teachings of Christ. According to Christian lore, he went out of his way to help the lowest of the low in his society, even those who were considered immoral or sinful by that society (prostitutes, lepers, thieves, etc). I'm not overtly religious (I was raised Baptist but I reject a lot of that sect's philosophies), but when I hand spare change to a homeless person on the street, I don't ask for sexual orientation before doing so. I see someone in need of assistance and I help where I can, because who they are doesn't matter, it's what they need that matters.

By the same token, I don't feel a business owner should care what someone's lifestyle or orientation is no matter their own beliefs, but in this nation it seems the Religious Right feels empowered to use their take on religion as a crutch to be cruel and dismissive towards those who are not perfectly in line with their own beliefs. Yes, they have a right to refuse service in certain cases, but when doing so steps on another person's right to even exist as they are, I feel it crosses a line that should not be crossed.

To put it another way: a gay couple's existence is not an aggression towards a homophobic person; there is no harm intended nor is there harm done by simply existing. But, the homophobe's negative actions towards that gay couple is an aggression.


Conservatives generally want the bakery to be able to refuse to bake the cake. Why aren't they also in favor of the ISP refusing to offer service, or the host refusing to offer use of their servers?

Do you believe a baker ought to be able to refuse service to someone over religion? If so, do you believe a baker whose beliefs condemn homosexuality should be able to refuse a same-sex couple? Do you believe a baker whose religion condemns Christianity should be able to refuse Christians?

If so, do you have a problem with a hosting service cutting off a site of which it disapproves?


> Conservatives generally want the bakery to be able to refuse to bake the cake. Why aren't they also in favor of the ISP refusing to offer service, or the host refusing to offer use of their servers?

It’s a very different scenario.

It’s more like people are hiring private investigators to follow gay people around and get a list of businesses they frequent and then organising harassment of those businesses until they refuse to do business with the homosexuals.

It’s orders of magnitude more creepy.


There's a line in one of Terry Pratchett's books where Sam Vimes (commander of the City Watch) muses that what he's doing isn't really secret surveillance because he has to stand back a bit to avoid being deafened by how loudly some guy is yelling his jingoistic crap in a public place.

Gab isn't some stealthy hidden dark-web thing that you can only access from behind seven proxies after a thirty-step initiation process. Same with Alex Jones. Nobody's breaking into their homes and secretly recording statements they'd never ever make in public; these folks are getting up on soapboxes and gleefully shouting at the top of their lungs in the explicit hope that as many people as possible will hear and know about them.

So, yeah, it's 100% fair for people to point to the person standing on a soapbox in the public square who's screaming at the top of his lungs, and say "I don't want to be associated with him, or with anyone who supports him". That's freedom, and the guy doing the screaming has no grounds to try to forbid that; that would be censorship of others' opinions.


I think any business should have the freedom to refuse to trade with anyone they dislike, however irrational or bigoted their decision may be. I also think everybody else can denounce, ostracize, or boycott them for similar, or rational and non-bigoted, inclinations.

Nowadays I do wonder about how much integrity is involved in our post-modern social strife. While I like to think I'd sell my bread on principle to anyone, regardless of behaviour or creed -- honestly, there are some awful people I'd rather have nothing to do with if I knew what they'd done (a child murderer or holocaust-supporter or rapist, or someone who defends them, to go to extremes).

Banishment is a social tool that has been in use for a long time (forever?), sometimes with very good reasons, probably as a last recourse; I'll feel better when we have deliberate and explicit criteria for this method.

ps: I'm neither a "conservative" nor a "progressive" -- though I hold views that may be labelled by others as one or the other.


Equating homosexuality with being an alt-right troll is completely disingenuous, to the point of being insulting.


If a given business is forbidden from discriminating for any aspect of their potential customer's life, then this rule has to be applied across the board, regardless of what you happen to approve or disapprove of.


Depends on the reason for disapproval, and how common that same reason is in that particular market.


That isn't necessarily supported by evidence, nor is it the most sophisticated view in contemporary philosophy.

Good evidence can't be trusted to cause a change of mind; see, for example, the classic paper "Biased Assimilation and Attitude Polarization: The Effects of Prior Theories on Subsequently Considered Evidence" (https://www.unc.edu/~fbaum/teaching/articles/jpsp-1979-Lord-...).


It’s certainly possible the people working at Microsoft believe they have a moral duty to keep hate speech like that off their platform. I know that if I worked at a platform company and there were folks advocating physical harm to anyone on that platform, I’d be pretty keen to remove them.

I don’t think free speech means a platform company has some imperative to offer a level playing field to racists.


> A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them

The problem is that on the internet there is no open commons where bad ideas can occur; instead they almost always occur on someone's service at some level. This leads to conflicts between the freedoms of the service providers and those of their clients.


> When they are cast to the shadows they can grow,

It's a nice poetic conceit to think of good ideas as thriving in the light like flowers while bad ideas flourish in the darkness like mushrooms, but they both thrive on the same thing -- attention. To make another comparison with things that grow in the shadows: the crawlspace under my front porch is full of bugs and critters, but they didn't get there because I cast them out of my house, and I don't keep my house bug-free by periodically inviting them in for debates.


Not much is actually getting censored. The bad stuff just gets moved to less respectable sites. The Stormer, for example, is still up there if you want to read it.


Is this an argument in favor of the Streisand effect? After all, most of us wouldn't have known about these posts if Microsoft hadn't asked for them to be deleted.

Or is it an argument that there is not enough social shaming on the Internet? Since that's what we're likely to get instead of reasoned debate.

Maybe the old arguments in favor of debating bad ideas need to be rethought to account for the fact that there are so many people posting them, and they don't necessarily deserve attention. It's not 1995 anymore; we know that attention is valuable, and it's important to have efficient defenses against denial-of-service attacks via spam, trolling, viral post-sharing, and so on. Debating every post of the same old outrage isn't efficient.

For bad ideas that aren't new, something like Snopes makes a bit more sense. And I'll point to the Ask Historians subreddit for a good example of how to run a high-quality Internet forum.


> A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them.

Then let them occur in the open. I welcome them to the nearest soapbox.

It's not clear at all, to this student of philosophy, why neo-Nazis are entitled to time on Microsoft's servers.


These aren't merely "bad ideas". It's hate speech advocating for violence against a historically oppressed group. There is no slippery slope; we know what hate speech and anti-semitism are, and those are the rules being applied here.


  A common argument in philosophy is that bad ideas should
  occur in the open - where everyone may ridicule them.

Unless people with those bad ideas find communities that accept those bad ideas. This is what happens with content bubbles and niche extremist communities.


"In one post, Little said he would livestream himself destroying an unspecified Holocaust memorial in the U.S."

Ugh. Did you even bother to read the article? This has absolutely nothing to do with "bad ideas." MSFT asked Gab to remove content that "incites violence." MSFT is not cracking down on anyone because they have some disagreement with their public policy positions or political philosophy.

Yelling "fire" in a crowded theater is not an "bad idea" it's deliberately putting people in immediate danger. Likewise advocating that we physically harm people or commit acts of terrorism against specific locations are not "bad ideas," they're advocating specific actions that will physically harm other people.

Obviously, there's no censorship of edgy and extreme ideas by MSFT; if MSFT wanted to do that, they would not be hosting Gab to begin with.

Instead, all they're doing is cracking down on content which incites violence, which is in violation of their terms of service, and is absolutely not protected by the First Amendment.

Finally, can you cite any evidence from the psychological literature showing that shining light on bad ideas changes anyone's mind? Just about everything I've read has suggested that's not the case.


Inciting violence is not free speech, just like yelling "fire!" in a crowded theater is not free speech.

The article clearly states that the removed comments incite violence:

> "Microsoft received a complaint about specific posts on Gab.ai that advocate ‘ritual death by torture’ and the ‘complete eradication’ of all Jews. After an initial review, we have concluded that this content incites violence, is not protected by the First Amendment, and violates Microsoft Azure’s acceptable use policy," Microsoft said in a statement to The Hill.

Are you suggesting that this type of language should be protected by free speech laws?


In the USA, it is only direct threats of imminent violence that are illegal.

So yes, according to the Supreme Court, those posts are probably legal.

It would only be things like "everyone, let's meet up at this specific person's house tonight and attack them" that would be illegal.


And even then, only if it is deemed likely that that speech would actually incite people to do it.


Good points. However, Microsoft's Acceptable Use policy may add other restrictions. Context isn't the only important thing but it does matter, and there is a context - both historical and contemporary - around the deleted comments in question.

And of course someone could always take MS to court if they feel their First Amendment rights have been violated. The ACLU has supported the freedom of speech of the KKK and Neo-Nazis plenty of times, so it's not as if there aren't resources.

Also, while IANAL, Wikipedia says, with regards to the "Brandenburg Test" used to determine "imminent lawless action", there is a "bad tendency" test which may also apply:

"In Schenck v. United States[5] the Court had adopted a "clear and present danger" test that Whitney v. California subsequently expanded to a "bad tendency" test: if speech has a "tendency" to cause sedition or lawlessness, it may constitutionally be prohibited. Dennis v. United States, a case dealing with prosecution of alleged Communists under the Smith Act for advocating the overthrow of the government, used the clear and present danger test while still upholding the defendants' convictions for acts that could not possibly have led to a speedy overthrow of the government."

https://en.wikipedia.org/wiki/Brandenburg_v._Ohio

It would be great if a lawyer could weigh in.


Not sure how old that common argument is, but whoever made it wasn't aware of what internet social media would look like over the last 5-10 years, especially counting the well-funded campaigns promoting things like climate change denialism, among others.

Big-hammer censorship is not the solution, as it enables censoring the truth too (truth that may threaten or inconvenience whoever wields the big hammer), but sometimes respecting the freedom of those who don't respect the same right for others is not a wise policy.


I guess that's not the problem here.

The problem is combining ignorance with confirmation bias and then multiplying it through distribution engines like FB, YouTube, podcasts, etc.

Unfortunately, information is NOT absolute. It's asymmetrical and heavily influenced by our limited world view, ignorance, and confirmation biases. Allowing bad actors to take advantage of this fact, and then letting them use massive distribution channels like FB, is a recipe for unrest.


Has Microsoft identified similar posts on Gab in the past, and requested/obtained their removal -- or else? (is Gab acting as a sudden victim?)

If not, how is that possible, since Gab was created explicitly to tolerate pretty much any kind of barely-legal speech that was coming under pressure on Twitter? Are we to believe that this style of expression is completely new to Gab? (Or did Microsoft just now notice?)


What about good ideas that are squashed by corporations, and are never known about by most of the public, because it might threaten their profits?

This includes corporate influence on governments, so that the politicians act as their servants in also squashing said good ideas.

Not sure what some of these ideas might be, off the top of my head right now, but I know that some have to exist.


>A common argument in philosophy is that bad ideas should occur in the open

Yes, we all know about that theory, but what we have seen happening over the last 10 years, as the internet has become populated by the masses, seems to suggest that theory is very wrong.


On a more pragmatic note, it's much harder to track these groups on the dark web. On the open internet, anyone can see their thoughts and it's much easier to track their location as well.


Another argument against prohibiting speech on some platform is that it will push people to platforms without any speech restrictions, radicalizing their views.


Which is what Gab was supposed to be, so that ship has basically sailed.


I'm shocked by how many liberals who really ought to know better are cheering for the deployment of corporate oligopolistic censorship at this scale. We're not talking about just kicking someone off Twitter... now we're talking about kicking them off supposedly neutral public cloud platforms.

This is a major escalation and honestly it's changing my mind a bit about the whole issue. I was a fence sitter before but now I'm siding with the libertarian crowd on this one, and I say this as someone who is nowhere near the alt-right politically.

We're going from what amounts to forum policing (though on huge scale quasi-public forums) to infrastructure level censorship. No longer can you just "go set up your own site." Even more significantly we are seeing an incredible amount of coordination between corporations on this, proving that competition and diversity in the marketplace is not sufficient to protect the openness of the Internet as a system.

Once again: liberals really should know better, especially any that care about net neutrality. This is a really extreme example of ISP traffic discrimination and it sets a precedent that this kind of thing is okay. A top level infrastructure provider blackballing a site for speech should be a third rail regardless of the content in question.

Censors always start with the least popular ideas and speakers. They do that for a reason: they want to gauge what they can get away with and they want to shift the Overton window toward increasing support for censorship. They know that few people will overtly stand up to defend Nazis, trolls, and blithering red faced demagogues, so that's where they start. Other popular targets in the past included pornographers, lewd writers and musicians, and religious blasphemers.

Two thoughts that ought to keep you up at night even if you are inclined to agree with these moves:

(1) What happens if/when the political winds shift and someone like Donald Trump or maybe someone even to the right of him ends up in control of these powers? Who gets silenced then? A power once created becomes a political entity that can easily be handed off.

(2) What happens if/when some liberal cause -- like a reboot of unionization and a new labor movement for example -- really threatens corporate and Wall St. profits at a large scale? Will Neo-unionists get this deplatformed and shadow banned now that it's been legitimized? Will anyone even notice?

Again... we are setting a precedent here that this is okay. You really all ought to know better. The reference metric for freedom of speech is the freedom enjoyed by the most offensive and least popular speakers: political fanatics, pornographers, demagogues, hate-mongers, etc. If they are free to speak then you are free to speak. If they're not then the process of clamp-down has started and you are next.

There's a reason the ACLU has in the past defended neo-Nazis and other unpopular speakers, and it's not because they support these speakers' messages. It's because you need a canary in the coal mine.

Edit:

The conspiracy nut devil that hangs out on my shoulder whispers that this combined with the abolition of net neutrality is a coordinated campaign. Use massively unpopular triggering demagogues to get the left to abandon its commitment to free speech while simultaneously using fallacious libertarian arguments to get the right to eliminate net neutrality. Put those together and you have a Great Firewall of America and enough public support from a broad enough subset of Americans to deploy it.

(The libertarian arguments against NN are fallacious because ISPs are government backed and sometimes even funded monopolies.)

The next ratcheting up would be for peering points to refuse to peer Internet traffic that they don't like, and abolition of net neutrality would allow that. That means Gab (or tomorrow moveon.org, who knows) couldn't even host at an indie hosting provider or overseas.

But that's crazy talk right?

If sites like Gab get banned from being hosted the next step for them will be to move to decentralized platforms or Tor. If the above is true I predict that this combined with the words or deeds of a few nutjobs will be used to start convincing both liberals and libertarian-minded conservatives to start supporting bans on un-escrowed encryption as well as ISP and cloud provider efforts to block "horizontal" network traffic. This is how you'll be convinced to support a ban on P2P protocols and privacy technologies.


> Censors always start with the least popular ideas and speakers. They do that for a reason: they want to gauge what they can get away with and they want to shift the Overton window toward increasing support for censorship.

Every time a neo-Nazi gets shut down, people trot out the "First they came for the communists…" — and yet the next time, it's a neo-Nazi again, every time. It would be bad if people who aren't promoting harm start getting shut down, but I don't see any evidence that current events will lead to that.

> (1) What happens if/when the political winds shift and someone like Donald Trump or maybe someone even to the right of him ends up in control of these powers? Who gets silenced then? A power once created becomes a political entity that can easily be handed off.

We're not talking about the creation of a new power here. This is a power that already exists, and bad actors' ability to use it isn't based on whether or not it's used for good.

> What happens if/when some liberal cause -- like a reboot of unionization and a new labor movement for example -- really threatens corporate and Wall St. profits at a large scale? Will Neo-unionists get this deplatformed and shadow banned now that it's been legitimized? Will anyone even notice?

Again, there is no causal line between "Neo-Nazis get deplatformed" and "Corporations deplatform their critics." You can have either one without the other.

In general, your fears seem to be based on an assumption that looks something like "Bad people won't do anything to good people that good people don't do to them." But history has repeatedly shown that this is not the case. Our refusal to take the threat of Nazism seriously the first time didn't in any way constrain the things they did to people.


You have to take a long view of this. Whatever powers we cede to the state or corporations now will remain long after nobody remembers Alex Jones or that Nazi punching bag (forget his name already!)

After 9/11 there was a mad rush to cede civil liberties and deploy surveillance. Many liberals and libertarians pointed out quite correctly that terrorism remained even after 9/11 a fringe activity and a statistically insignificant risk. Was it really worth it to toss the bill of rights (and the Geneva convention) for a risk that didn't even rank when compared with routine low probability accidents?

The same logic applies now. The Nazis are loud and obnoxious but few in number, and those hurt by these clowns number far fewer than the dead from Oklahoma City or 9/11.

Open discourse, free speech, and net neutrality are more valuable than this.

Gave you an upvote since I don't think what you wrote was that unreasonable. I just think it's short sighted.


I just wanted to pop in and say that I agree 100%.

Good post.


>There's a reason the ACLU has in the past defended neo-Nazis and other unpopular speakers, and it's not because they support these speakers' messages. It's because you need a canary in the coal mine.

People aren't mentioning this, but this has already happened.

I'm not white, just to be clear, but back sometime last year I knew infrastructure-level censorship was coming when domain registrars, GoDaddy in particular, started censoring sites. The canary in the coal mine was the Daily Stormer.

It's one thing to want them to be quiet. It's another thing to set the precedent of infrastructure forcing them to be quiet. As you said, it sets a precedent.

EDIT: DNS censorship has indeed been mentioned in this thread.


Except that the Daily Stormer is still online. As long as net neutrality exists (in practice if not in law) there is no "infrastructure-level censorship" to speak of beyond a slight lack of convenience. What is the worst-case outcome of domain registrars refusing to do business with Nazis? People who want to visit Nazi websites would have to type in numerical IP addresses.

The only "infrastructure-level censorship" we should worry about is the Internet itself. As long as you can set up your own servers and host your own services, your free speech rights are not really in jeopardy. Free speech does not mean you have the right to buy the megaphone of your choosing; it does mean that you can voice your opinion using a megaphone, even if you had to assemble the megaphone yourself.


All true. And yet still we have to grapple with the reality that our public square is in the private sector.


Good post and agree with you.

>libertarian crowd on this one, and I say this as someone who is nowhere near the alt-right politically.

One thing that concerns me is that people who stand up for this or lean libertarian are seen as alt-right simply because they disagree with this corporate censorship. To me this censorship or suppression goes against the founding principles of the internet. I don't understand why it is acceptable to label everyone as such.


Censorship of legal speech isn't the answer, ever. I can't wait for SV to find its compass again and return to the love of free speech. That said, I do think we need a solution that lets people avoid certain things they find toxic on our open platforms. I don't want to read ideas from certain people, so letting the end user decide is the best solution.


> A common argument in philosophy is that bad ideas should occur in the open - where everyone may ridicule them.

Also, so that good ideas can have their day in the sun. Let's not forget that most "good" ideas were once considered bad ideas.

A list of ideas that were once considered bad:

Abolitionism. LGBT rights. Heliocentrism. Germ theory. The list is endless.

The idea is that all ideas deserve discussion so that the good rises and the bad sinks.

Also, by allowing censorship when you have likeminded people in power, you also allow other people to censor in the future when power shifts.

This is just basic stuff you learn in philosophy 101. Unfortunately, we don't seem to be teaching basic principles in high school/college anymore.

People naively think that censorship keeps the bad ideas away. Most of the time, it is used to keep the good ideas away. Look at the censorship by the Catholic Church, the Soviet Union, China, Nazi Germany, etc.

If your ideas have solid footing, you'll be against censorship. It's when you can't defend your ideas that you advocate for censorship.


There is a big difference between ideas like LGBT/PoC/Women rights, abolitionism, ... and White Supremacy, Racism, Religious Fundamentalism, ... : One advocates for inclusion, social peace and acceptance, the other for exclusion, hate and race/religion war.

This is not something we can ignore; we can't treat all ideas as if they were the same.

In Europe, it's very common to have an exception to Free Speech when it comes to hate speech, and I have never seen it used in the "wrong" way to keep good ideas away.


> There is a big difference between ideas like LGBT/PoC/Women rights, abolitionism, ... and White Supremacy, Racism, Religious Fundamentalism, ... : One advocates for inclusion, social peace and acceptance, the other for exclusion, hate and race/religion war.

Ideas are ideas. And how do we understand the difference if one is censored? Also, there is plenty of exclusion and hate in extremist LGBT, feminist, etc. communities. It's not fair to point to the extremism of one side but not the extremism of the other. There are feminists who believe men should be wiped out, there are lesbians who are against trans women, and there are trans people who are against gay marriage. But that's not the point. The point is that all ideas should be allowed to be expressed so that they can be debated, criticized, etc. It's the American way.

> In Europe, it's very common to have an exception to Free Speech when it comes to hate speech

Yes. That's why Europe gave us such "wonderful" things as Nazi Germany, the Soviet Union, Fascist Italy, etc.

Do you know why those existed while the US was immune to them? Nazi Germany censored everything it didn't like, the Soviet Union censored everything it didn't like, Fascist Italy censored everything it didn't like. But we had freedom of speech. It's why the US, and not Europe, is the symbol of freedom in the world.

You do realize that in much of western Europe, if you criticize Islamic fundamentalism or Jewish fundamentalism, it's considered hate speech and you could go to prison for it, right? Hell, in some places in the world, criticizing religious fundamentalism is an offense worthy of the death penalty.

Are you sure you want to censor all hateful or offensive speech? Because atheist speech and LGBT speech are the most censored speech in the world. Most of the world finds these types of speech the most hateful and offensive. Are you really advocating that we ban this speech since it offends so many?

How about we stick to principles?


>Actions like this serve to placate the public.

I'm not placated. According to Microsoft,

>After an initial review, we have concluded that this content incites violence, is not protected by the First Amendment, and violates Microsoft Azure’s acceptable use policy

If the posts were illegal, why isn't the poster being arrested? Where are the real police in this matter? Why are California cops deferring action to the Redmond internet police?


This sort of thing illustrates the biggest issue with web hosting, domain registrars and the internet's infrastructure in general; everything is privately owned by companies who get unlimited rights to decide who and what they want to host.

The solution is to require all hosting and domain companies to act as utilities, and require neutrality in regards to any content that's legal. They're not private forums or homes, they're the internet equivalent to the electricity company or the water company. They're the internet equivalent to a phone service provider or ISP.

Why do/did ISPs have to deal with net neutrality while large hosting companies and registrars get free reign? The power company can't cut off your service because you offended someone online or what not.

If you want to argue the internet is a place for free speech, then it needs places you can host content/register domains/whatever that act like a public square, not a shopping centre.


The phone company would be a better analogy, and they absolutely can cut off your service if you’re being a nuisance. The First Amendment says your government won’t pass laws against you saying stuff; it stops you from going to jail for the things you say, and it’s there to protect the people’s right to criticise the government.

It is not there to force companies to serve you if they don’t want to.


Okay, so sticking with that analogy, can your phone company cut off your service if you're making racist comments or discussing unpopular political ideas with your friends over the phone? Or, perhaps more importantly, _should_ they be able to?


Please quit calling "exterminating the Jews" an unpopular political opinion.


I think the term makes sense, because that's what it is, and that's the reason why people want to shut it down. It also brings into focus the main point of: where do you draw the line? When is a political opinion unpopular enough to warrant shutting it down?


The difference is incitement to violence.


But that's not really the difference, since there is plenty of incitement to violence that is politically palatable.


Phone conversations are private.


What makes them private? They're operated on the phone company's lines and by talking on their lines you're compelling them to carry certain speech. The logic that states private communication carriers have an unlimited right to ban what they want to on their networks leads to the conclusion that "private" phone calls can be censored.


> What makes them private?

The law. Wire-tapping isn't legal.


It isn't wire-tapping if they block you from accessing the wire to begin with. Forcing the phone companies to carry your speech over their wires is compelling speech across their property which should be against their 1A rights if you're correct. However, it clearly has been ruled not to violate their rights.


Yes it is.

I mean, there's a law against it. But there is a lot of known warrantless wire-tapping, both by govt and industry, and everyone shrugs.

And it's not even a de jure vs de facto thing, since by now a lot of the ostensibly illegal stuff has been either defended successfully in courts or buttressed by the govt.


And what if they weren't?

Would you have a problem with a phone company listening into your conversations and determining if you said something bad or not?


If you broke the law by saying something, yes. Incitement to violence is not protected free speech.


That's not quite the definition. But suppose some piece of speech is not protected: that doesn't necessarily make it illegal; it just means it is legal for the state to censor you.


This isn't about illegal speech.

This is about perfectly legal speech that does not fall under the definition of incitement.

We have the police for handling illegal incitement to violence.


> The solution is to require all hosting and domain companies to act as utilities, and require neutrality in regards to any content that's legal. They're not private forums or homes, they're the internet equivalent to the electricity company or the water company. They're the internet equivalent to a phone service provider or ISP.

Exactly this. When the First Amendment was authored, I don't think it was even conceivable that any organization besides a government or state church could effectively censor speech at scale. Now that we have "private" organizations that effectively have that power, it's perhaps time that the law be updated to reflect that change in facts and preserve people's right to free expression.

The internet shouldn't be another Zuccotti Park.


When the First Amendment was authored, I suspect people thought public property was going to be where the largest audience was, with the town hall and park and square and what not being the main platform for discussion. Reaching the crowd meant going out with a handbell or standing on a box passing out pamphlets. Stopping the government from arresting someone advertising in the town square was more important than stopping a shop from kicking out a customer they didn't like.

But that's not the case now. The platforms hosting your work are privately owned, the large community websites/social networks/whatever with millions or billions of users are privately owned, and a larger portion of real life locations people frequent are privately owned too.

The way society is going seems to be making the amendment less and less relevant as time goes on.


>The way society is going seems to be making the amendment less and less relevant as time goes on.

Exactly. As society moves more and more online, it moves more and more into a virtual territory that is owned at every level by corporate entities who grant no rights to their users. If we're not careful, we could end up with a strange cyber-feudal arrangement.

The "it's their platform" argument seems strange to me when you start getting into the question of corporations beyond a certain size or monopoly. In the 1700s I'm sure there was a lot of arguing that certain kinds of protest were groundless because this is "the King's land." It took a re-examination of what the boundaries should be before that changed.


Sorry, but you seem to be confusing "the right to speak" with "the right to be heard." Your right to speak freely is not lost simply because the town square is empty and everyone is inside private clubs that will not allow you in.


Some of the same people who voted for the First Amendment very shortly thereafter authored the Alien and Sedition Acts.

It was certainly much more expensive to distribute writing in 1800 than it is today. It would have been much easier to stop a person from printing.


> Why do/did ISPs have to deal with net neutrality while large hosting companies and registrars get free reign?

Because ISPs are naturally sharply limited by infrastructure requirements (access to and across property, or access to scarce spectrum), but anyone who connects to an ISP can host content.


Anyone with a server can host content in a technical sense, but can it ever scale without playing ideologically with the big players?

One tactic for deplatforming undesirable sites is a DDoS -- if you put your home server online, what options do you have for DDoS mitigation if the big players in that space decide they're on the political side of those doing the DDoS?


> Anyone with a server can host content in a technical sense, but can it ever scale without playing ideologically with the big players?

For a wide reach via the press, you may need to either buy more (or higher capacity) presses or convince those who already have them of the value of devoting them to your ideas.

That's inherent in freedom of the press and the marketplace of ideas.

You aren't entitled to someone else's platform.


Perhaps I was unclear. When I said "scale," I meant it specifically in the context of being able to weather DDoS attacks.

By way of analogy, consider you are the owner of a small independent city paper -- the sort you see offered for free in boxes on streetcorners.

Now imagine a large corporate entity -- perhaps a competing paper or someone with a personal grudge or a political mission. Every day, they go to all the locations where the paper is offered, take every copy, and destroy them so that no one else can read them. You spend the money you have to put it on more streetcorners, but your rival has deeper pockets than you and can hire people to go to all the new locations and take and destroy those papers too.

Has there or has there not been any violation of your rights?


> Has there or has there not been any violation of your rights?

By the attacker? Sure, that's a straightforward violation of property rights.

By a private third party that is better able to afford to, say, hire guards to protect against that attack, for choosing not to allow you to use their resources for that purpose? No.

The fact that people who might commit an attack exist does not give you a claim by right on other private party’s resources as an anticipatory defense.

You might have a policy argument that government should provide public, equal access, content neutral resources for that purpose, but that is a claim of on-balance public benefit, not one to which there is an inherent right by any common conception of which I am aware.


"Perhaps I was unclear. When I said "scale," I meant it specifically in the context of being able to weather DDoS attacks."

The right to speak is not the right to not be shouted down.


Are people still able to download a web server and host their own content? What does anyone need a host for? If a bunch of angry racists want to start their own social network, they could build it, if they really wanted a place to spew their hate. Is anyone really stopping them from connecting to the internet?


Is that the point?

There are a lot of people here completely in support of this, and it's not hard to trace an IP address to an ISP. When brigades start calling for ISPs to stop serving these sites, do you really think that's when they'll stand up for free speech?

Keep in mind in the span of just a few days, we've seen the jump from application platforms (Facebook, Google) to now infrastructure providers. The leap to ISPs is not that far, and something tells me when it happens, supporters will simply move the bar again.


We already have organized brigades demanding ISPs stop providing service to certain people. That is to say, people who spam on an industrial scale.

Do you think ISPs should instead be required to host spammers? Perhaps even to provide them extra resources to help them overcome the unfair way people's spam filters censor their valuable free speech?


I'm open to simply acknowledging the Internet is no place for free speech. My point is that ISPs are just as vulnerable to brigading as Facebook or Microsoft, so arguing that it's not a big problem because they're still free to host content on their own hardware is missing the point.

It just seems odd how rabidly pro-net-neutrality places like this were, only to make cases like yours a few months later when free speech on the net actually arrives on the chopping block. How long will it take before my ISP blocks Netflix, ostensibly because of complaints of offensive content?

But besides that, it's easy to draw a pretty clear distinction between allowing someone's web site to be accessible as compared to being actively compelled to send whatever outgoing transmissions a group likes. Packets are packets, but just as we don't allow screaming "fire" in a crowded theatre, freedom of speech is still incredibly important even if not absolute.


I'd argue that the pipe -- that is, the entity which owns and controls the physical fiber/wires/etc. through which the packets flow -- should operate under a certain level of enforced neutrality, in the sense that if A and B each are customers of C paying for access to the pipe, C shouldn't decide to disallow, filter, or throttle traffic between them. C (the ISP) should only be able to choose whether or not to take on or keep a particular customer, and to charge fees according to usage. The net neutrality debate is basically over whether to allow C to double dip by refusing to provide the paid-for service unless it's paid for again (and again, and again...).

But that says nothing about whether an ISP should be forced to accept any and all people as paying customers. And it's very far from saying that particular web sites should be forced to accept any and all users and any and all behaviors from those users.

Which means you've basically failed in your quest to label people as hypocrites. Would you like to try again?

(also, you may want to learn a bit more about First Amendment jurisprudence if your go-to is "fire in a crowded theater" -- it makes you look about as well-informed as someone who genuinely insists "the internet is a series of tubes")


That you can selectively interpret net neutrality to justify ISPs arbitrarily blocking access to content is unsurprising, but nonetheless irrelevant.


Unless/until you're a customer of an ISP and hosting something on the connection they provide, there's no content.

Now, think through the logical consequences of your argument for a moment, and ask whether you really want every ISP and every hosting service and every website everywhere to be legally required to accept absolutely anyone, absolutely any behavior and absolutely any content, without exception. Do you really want to live in that world? Or are you just grasping at straws to try to maintain an argument you made in bad faith from the beginning?

(hint: it's the second one)


I have no interest in defending your straw man. I didn’t propose legislation, and the idea that net neutrality wasn’t intended to keep ISPs from arbitrarily blocking content is ridiculous.

If it helps, though, my comments are still there unedited. Maybe you’re mixing up my comments with someone else’s.


Historically if you have your own AS number it's very very hard to get kicked off the Internet. But maybe Nazis are revolting enough to make it happen.


That misses the point.

As long as there is an ISP that can be publicly pressured to block the source of content, and the source of that content can be easily differentiated (to avoid blocking other content), we're one Twitter mob away from "existing on the Internet" not being enough to avoid censorship.

And it doesn't even need to be their ISP. It could be yours or mine.


Could they? Couldn't an ISP decide they don't want to serve that hateful content over their pipes and disconnect them? Or what if the domain name registrar decides they don't want to do business with an angry racist? Or what if an upstream backbone provider decides they could get some good PR by blackholing their IP?

These days, private companies have a _lot_ of power over what speech is allowed to be heard. Perhaps even more so than the government.


The solution is not to create more laws which will give the real danger (governments) more power to use their violence.

The solution is to simply have everyone host themselves. Even a moderate home broadband connection is more than enough to serve up your personal website.

Once you start relying on third parties you've lost. That's their property, not yours. When you claim it is yours and use government violence to enforce your wishes you are doing more damage to society than what you're trying to stop.
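
For what it's worth, "hosting yourself" really can be that small. A minimal sketch, assuming Python 3 on the machine and a home router that forwards some port to it (the port number here is arbitrary):

  # serve the files in the current directory on every interface
  from http.server import HTTPServer, SimpleHTTPRequestHandler

  HTTPServer(("0.0.0.0", 8080), SimpleHTTPRequestHandler).serve_forever()

Not a hardened setup by any means, just an illustration that once the content is on your own box, the remaining third-party dependency is the network connection itself.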


It’s impossible to avoid relying on third parties. Somebody has to lease you the DNS name; we’ve already seen that used to punish sites. Somebody has to host your server or sell you an internet connection. Moving to Tor is the only way to protect yourself from those means of punishment, at which point you’ve lost 99.99% of any audience you might’ve had. You might as well just go to a physical newsletter.


Your ISP is also a private third party. If Microsoft rejects you, why wouldn't Comcast? The 70 dollars you pay for a home connection isn't worth dealing with a Twitter mob, so they'll gladly boot you.


Some might. But that's where this fight should be fought: keeping internet access as dumb pipe infrastructure. The solid base on which the internet rests.

As long as that's true people can always route around censorship in services provided remotely.

In terms of implementation, notabug.io is a pretty cool (and simple for the user) way to do p2p + federated-node discussion boards on the web. It's not perfect, but this kind of thing, paired with Tor for domain resolution and transport, could provide a communications platform as long as internet access is still uncensored.


I am under the impression that in the US, ISPs no longer have to deal with net neutrality. So why would large hosting companies have to?


Hosting companies never had to, because net neutrality is only about the Internet itself, not the edge services that host websites and whatnot.

One bit of hilarious irony is that the only reason people in the US can access the Daily Stormer is net neutrality (now just de facto), yet the Daily Stormer was full of people calling for the end of net neutrality.


> Why do/did ISPs have to deal with net neutrality while large hosting companies and registrars get free reign?

*free rein


One thing that hasn't yet been remarked on here is that the 48-hour-warning message said that the linked posts were flagged for having phishing links. Notice that the statements MS made afterward were not on the grounds of the original "phishing links" report but focused on the content instead.

However, if you look at the posts, you'll see a fair amount of unsavory content but no phishing links.

I feel like this was purposeful on the part of whatever political group made these reports.

Most likely, these hosting companies have automated anti-spam/phishing systems whereby, if a large number of reports come in, they will automatically send out an alert to the suspected offending party.

If Outraged Group X goes to the company saying "this site you're hosting has vile speech and so you should take it down," it probably has to go through an internal company process, which might have a slightly higher bar for acting on it.

However, if Outraged Group X falsely claims that "this site is hosting phishing links", something which might trigger an algorithmic response, then the company cannot easily reverse course -- because then the story becomes "Company ABC actively reverses course on allowing hate speech." The PR fallout locks them in.
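
To make the speculation concrete, the kind of automated pipeline being described might look roughly like the sketch below. Everything in it (the threshold, the function names) is hypothetical, not Microsoft's actual system:

  # hypothetical report-threshold auto-flagger, for illustration only
  from collections import Counter

  REPORT_THRESHOLD = 50  # arbitrary number of abuse reports
  report_counts = Counter()

  def send_takedown_warning(url):
      # hypothetical notification step (e.g. a 48-hour warning email)
      print(f"48-hour warning issued for {url}")

  def handle_abuse_report(url, category):
      report_counts[url] += 1
      if category == "phishing" and report_counts[url] >= REPORT_THRESHOLD:
          send_takedown_warning(url)

The point being that a purely count-based trigger like this has no idea whether the reports are genuine, which is exactly what would make it attractive to a coordinated reporting campaign.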


It's disconcerting to see how many tech and free speech people are clamoring for the censorship of dissenting opinion and are jubilant when it happens.

Free speech has always been a very counter-intuitive process. Opinions that directly oppose yours or even your very person will rub you the wrong way, and to be intent on defending the right of your opponent to voice his opinion seems paradoxical, but we nevertheless concede that this serves the greater good.

The recent Alex Jones incident was legitimized on the basis of dissemination of dangerous ideas. With Gab, the argument is that it is hate speech. The most compelling argument for free speech is that it removes the subjective and variable limits of speech and by extension, slippery slopes.

If you must know, while I do consider myself a political conservative, I cannot bear to listen to Alex Jones and I've never used Gab. I'm radically free speech which means I enjoy living in a world where literally everyone can say quite literally everything. Whether there is a slippery slope angle to all this is not even relevant for me. I want to hear the ideas of everyone who is broadcasting.

The very arguments I am expounding on are already controversial, but they weren't very long ago. Makes you think.


The free speech absolutists would have been more convincing had they been as up in arms about the banning of ISIS or ISIS-affiliated accounts over the last several years. This happened by the hundreds of thousands.


For better or worse foreign and domestic terrorist speech is treated separately. If ISIS was a domestic terrorist organization afaik they would have 1A rights.


What's the connection to the first amendment? Did the government dictate bans? I saw very little concern from the "free speech" crowd about the government's ability to censor US publishers from publishing content by foreign authors.


There's been plenty of concern about that. See the Mehanna case from a few years back. Also, the USG was giving Twitter shit about having terrorist content on their site for years.


The right to free speech shouldn't supersede all other fundamental human rights. Why should we afford modern day nazis a platform to call for the extermination of jews, when they so obviously do not respect the right to life or freedom of the people they hate? How can you engage in rational discussion with someone who does not believe you have a right to live?


The same reason we should allow modern day sjws a platform to call for the extermination of whites.


> The most compelling argument for free speech is that it removes the subjective and variable limits of speech and, by extension, slippery slopes.

> [...] I'm radically pro-free-speech, which means I enjoy living in a world where literally everyone can say quite literally anything. Whether there is a slippery slope angle to all this is not even relevant to me.

Perhaps you'd like to explain the fundamental difference between the outcomes that lie at the bottom of these variably tolerable slopes.


The posts in question are frankly pretty awful, but this does kind of put the lie to the idea that if you don't like FaceGoogleBook's censorship you can just set up your own website.


Well you can host your own website on your own server.

Of course, I do wonder how far this goes. What happens if the DNS server refuses to host the IP lookup? Host your own DNS server? What if the browsers refuse to allow access to the site? Build your own browser? What if ISPs refuse to transfer the data over the wire? Make your own internet?

People have argued that there is a core difference between this and net neutrality (at least as far as an ISP's ability to control what goes over its wires), but it feels to me that they are closer than people realize, and the standards set by one can influence the other.


> Well you can host your own website on your own server.

Sure, then the people who drove you off Facebook/Amazon/Apple iTunes/Youtube/Azure/EC2/Digitalocean/Cloudflare will go after your DNS registrar and the ISP/CDN/colocation-facility you're using. They won't give up and they'll ruthlessly go after every single commercial entity you do business with. After all, if it's not the government punishing you for speech, it's not censorship and it's fine!

That's the trick of this kind of lawfare -- having any kind of internet presence inherently involves relationships with private, non-governmental entities, and it's disturbingly easy to suborn them and shame them into refusing to do business with unsavoury people.


Yeah, I don't buy that. If Stormfront and Daily Stormer can still be up, then there can't really be any credibility to that argument.


> Yeah, I don't buy that. If Stormfront and Daily Stormer can still be up, then there can't really be any credibility to that argument.

That those two loathsome websites are still up doesn't mean that the technique I described is invalid; it just means it doesn't have a 100% kill rate, and it certainly works on less well-resourced, less determined targets.

I'm not worried about those specific websites -- I'm worried about the increased spread and normalisation of the "contact everyone %s has a business relationship with, and shame them until they drop them as a customer" tactic. Where does it end? We've established that pressuring domain registrars and hosts is fine; what's fair game after that?

Is it fine for me to pressure and suborn an electric company into cutting off service to someone I don't like (after all, without electricity, they can't make those bad internet posts!)? If I can rile up enough of an internet mob, can I get them evicted from their residence, simply by annoying their landlord until the trouble's not worth the monthly rent cheque? I've seen firsthand these tactics used by internet mobs and it's a very ugly process, and it's not something whose prevalence I'm happy to see expand.

Just because in these cases the targets are generally disgusting doesn't mean it's something that should be tolerated (if only because the people you hate will positively relish the chance to use it against your side of things).


> Is it fine for me to pressure and suborn an electric company

By using this analogy, you are implicitly arguing that Azure (and presumably cloud hosting generally) is a public utility and ought to be regulated like one, with tariff controls, neutrality requirements, universal service subsidies, etc.

If that's the argument you wish to make, fine, let's have that discussion.

> Just because in these cases the targets are generally disgusting doesn't mean it's something that should be tolerated

No, it should be tolerated because the service is not of a kind that has been established to warrant utility-like treatment.

And it should be cheered because the targets are despicable, given that the pressure already falls within the class of things that should be tolerated. But if the service were of a kind where this sort of pressure didn't warrant tolerating regardless of the target (presuming it's not aimed at a generally protected class for a service offered publicly), then it wouldn't be acceptable even with this target group.


"Is it fine for me to pressure and suborn an electric company into cutting off service to someone I don't like (after all, without electricity, they can't make those bad internet posts!). If I can rile up enough of an internet mob, can I get them evicted from their residence, simply by annoying their landlord until the trouble's not worth the monthly rent cheque?"

The really funny, and by funny I mean sad, thing about your argument is that you're all upset about this happening to racists and bigots, but you completely ignore that this was a fact of life for many of the marginalized communities that they terrorize.

"I've seen firsthand these tactics used by internet mobs and it's a very ugly process, and it's not something whose prevalence I'm happy to see expand."

You know what's even uglier? The tactics used by white supremacists to terrorize marginalized communities.


> The really funny, and by funny I mean sad, thing about your argument is that you're all upset about this happening to racists and bigots, but you completely ignore that this was a fact of life for many of the marginalized communities that they terrorize.

No. This is frankly wrong. In my comment (https://news.ycombinator.com/item?id=17730702) that you replied to, I described the websites in question as "loathsome" and "generally disgusting". If that wasn't enough, I will say it explicitly: I do not endorse/support racists or bigots or white supremacists or white nationalists or Nazis or Neo-Nazis or the alt-right or other flavours of fascism.

I find this incident mildly troubling not because of the targets in question (I don't give a solitary fuck about Gab or the Daily Stormer or their users/admins/owners), but because of the precedent it sets -- and specifically that this precedent will be exploited by these racists, against marginalised communities.


Can you tell the story about the eviction which took place because of internet trolls?


The DS's original domain, and another subsequent one, were deleted or taken over by the registrar, I thought?


Did you read any headlines in the last couple days?


And like for everything else, first a precedent is created with an indefensible case of terrorism/pedophilia/neo-nazism, then the rule gets progressively applied more broadly.


Or, in the real world, cases like these actually reinforce the applicability of legal rights. Because in the real world, judges are able to protect the rights of pedophiles and neo-nazis while also upholding the laws against child pornography and hate crimes.

In fact, a fair number of modern legal rights were established in cases involving fairly heinous defendants. For example: in recent decades, a lot of non-political free speech cases involve pornography or hate speech.

[Note: terrorism is different because nationality matters. Domestic terrorists get the benefit of their constitutional rights, foreign terrorists do not, especially if said foreigners have renounced the citizenship (and thus protections) of another state. This is why Guantanamo was legally defensible, if morally reprehensible.]


The main difference we are seeing in this new world of large tech companies is that the role of the "judge" is diminished. Groups figure out the fine lines on which to make judgments, but only as a matter of internal company policy, behind closed doors, under guidelines none of us get to have a say in or even read.

When a judge sentences you to prison, you know pretty well exactly what sentence of which law was the one that did it. When deplatformed from these private-but-ever-growing services, vague "ToS violations" is the best you get. Very easy for "rule of law" to become "rule of man."


A website ban isn't even remotely the same thing as a legal sanction.

If you want to control the platform, create your own. You'll need the hardware and the software, and it could get expensive. But if you want to spread hate speech or libel, suck up the cost.

If you want to be on someone else's platform, you accept the restrictions of using that platform. Hate speech is on the prohibited list of almost every major platform (except Twitter).


As someone else pointed out, what if they go after your DNS registrar? Your SSL certificate provider? Where will it end? Deplatforming people who spew vile (but legal) speech isn't good for society, and it's contrary to the spirit of the First Amendment.


Well it ends with (generic) you running a hidden service on TOR that only committed misanthropes are willing to participate in on a regular basis, and looking over your shoulder in public a lot.

> Deplatforming people who spew vile (but legal) speech isn't good for society

On the contrary, I think it's very good for the same reason I would want to drain an abscess or excise a tumor. The vile speech that you mention isn't harmless, and the people targeted by vile speech are part of society. What benefit is achieved that exceeds the costs imposed upon them?


> A website ban isn't even remotely the same thing as a legal sanction.

The Supreme Court seemed to have a very different take (Packingham v. North Carolina):

"Social media allows users to gain access to information and communicate with one another on any subject that might come to mind. With one broad stroke, North Carolina bars access to what for many are the principal sources for knowing current events, checking ads for employment, speaking and listening in the modern public square, and otherwise exploring the vast realms of human thought and knowledge. Foreclosing access to social media altogether thus prevents users from engaging in the legitimate exercise of First Amendment rights."


North Carolina was doing the banning, not the website. AKA, it was a government sanction...

North Carolina was also banning convicts' access to multiple websites, so it was more of an internet ban.

(It helps to read the entire case.)


That wasn't the point of my comment. Yes, 1A only applies to government censorship; however, the SC clearly views access to the internet and social media as necessary for the full exercise of 1A rights. It's now on the legislature to actually write the laws that would grant free speech rights to internet users.


In 2002, only about 5% of incarcerated felons in the US received a trial[1]. The other 95% pleaded guilty, likely in the face of overwhelming pressure, insufficient legal representation, and the threat of many more years in jail if they took their case to trial and lost.

1: https://www.bjs.gov/content/pub/pdf/sc0204st.pdf


Almost all of the people who plead guilty are actually guilty. (I'm a former public defender. Most of our clients admitted to committing the crime; they just wanted to minimize any fines or the time spent in jail/prison.)

The Hollywood trope of someone pleading guilty to protect someone else doesn't happen in real life--it's a disbarrable offense in most states for a prosecutor to suggest that they'd imprison a family member (such as a spouse) for X crime if the defendant doesn't plead guilty to Y crime. Moreover, plea bargains, by their nature, apply only to the defendant; other individuals are not parties to the bargain and aren't bound to (or protected by) it.

In some states, inadequate representation is a serious concern. These are mostly red states. People there have repeatedly, over the course of decades, chosen to underfund public defenders. Frequently, this is despite having friends or family who have been arrested and gone through the criminal process. If, after several decades and personal exposure to the inadequacies of the system, they choose to screw themselves over, why should I, in a state willing to fund its public defenders, spend more of my money when those people won't?

The threat of increased jail time is a popular online meme not borne out in real life. Armchair lawyers don't realize that you can plead at any time before the verdict--literally, up to the second it's read out. This means that you can wait to see how the case is going before deciding whether to plead. But again, most people are actually guilty, accept that there is a punishment to pay for their crime, and simply want to move on.


> What happens if the DNS server refuses to host the IP lookup?

No need. It's easier if DNS registrars conspire to confiscate and/or refuse to sell domain names. This has already been done. SSL certs can be, and have been, revoked. The ISPs haven't really been involved in these types of actions since the 90s, so I'm curious what their stance is. For now, if you've been run off the face of the WWW, you still have Tor hidden services or IPFS. Who knows what will happen when the ISPs get involved.


Personally, I would draw the line at requiring ISPs to carry legal IP traffic fairly ("net neutrality"), with pretty much everything else being fair game for terms of service etc. I would consider extending the "neutrality" aspect to domains, considering that ICANN effectively has a monopoly on DNS.

There are still some practical problems with hosting your own website on those "primitives", most importantly getting an internet connection suitable for hosting anything in the first place if nobody is willing to co-locate your servers. One would hope that with the proliferation of IPv6 and FTTx, actually hosting stuff from your basement would become more realistic.


> What happens if the DNS server refuses to host the IP lookup?

That already happened; the Daily Stormer (yep, Nazis) was kicked off GoDaddy, Google, Tucows, Namecheap, DreamHost, etc. https://en.wikipedia.org/wiki/The_Daily_Stormer#Site_hosting...


The only thing you can really do is use Tor. But you have to be very, very careful and know what you're doing. It's not something most people can set up, and not something most people can (easily) access.


Well, they kind of shot themselves in the foot by leaving big social media platforms... for a big server hosting platform. If they wanted to go Henry David Thoreau on the corporate internet, why didn't they buy physical equipment?


In your scenario, are the datacenters and ISPs immune to political pressure?


Well, if we had net neutrality, the ISPs would be legally mandated to be content neutral; you can, of course, run your own data center.

EDIT: It may not be cheap, but the “free” in free press is libre, not gratis.


Maybe not, but there are a much more diverse set of ISPs in different countries than there are cloud-hosting providers. We know that some of them are looser with what they will or won't enforce as long as you keep paying them. Can you imagine all those spammers from Bulgarian ISPs hosting their C&C servers on Azure?

I'm not saying there's a bulletproof-safe way for Gab, or any other site for that matter, to host - what if Trump slips Comcast $20 and a caramel tomorrow and tells them to bucket CNN traffic? - but there are certain choices which are obviously more risky than others.


No matter where they end up hosting they'll most likely be breaking some ToS of either their colo or ISP.


It's ToS violations all the way down


The continued existence of Stormfront seems like a pretty strong rebuttal.


They keep getting kicked off all the TLDs.

Even .ru revoked their domain.


There are literally thousands of hosts other than Azure. Worst-case scenario, they could run their own servers or use a colocation service.


"Worst-case scenario" for them would be datacenters and colo providers refusing them service. They have ToS too, as does their network providers.


Would you be OK if an Islamic terrorist group threatened the West using Microsoft's services?


Having read the article but not knowing anything about the posts themselves, I think most of this thread misses the point.

Clearly Microsoft knew it was hosting Gab and what Gab was. There's a troll argument that suggests Gab is just "free speech twitter", but of course that's not the case: I've been screenshotting the front page for months, from a random anonymous account, and every time I've done it the front page was full of horrible Islamophobic, racist, and anti-Semitic crap. That and bot content.

My point here is: everybody knows that's what Gab is. Microsoft isn't pushing back on Gab's anti-Semitism --- without anti-Semitism, there is no Gab. They had specific harm reduction problems with a pair of posts, one of which, according to this article, was a call for violence directed towards Jewish people.


> Clearly Microsoft knew it was hosting Gab and what Gab was. There's a troll argument that suggests Gab is just "free speech twitter", but of course that's not the case: I've been screenshotting the front page for months, from a random anonymous account, and every time I've done it the front page was full of horrible Islamophobic, racist, and anti-Semitic crap. That and bot content.

This seems to accurately describe "free speech twitter". All the content that would get people banned on Twitter is displaced into sites like Gab and 4chan. The few occasions I've perused these sites supported this: there was a lot of hateful speech against minorities, and only a few instances of hateful speech against the majority. The latter were a very narrow minority, but as far as I could tell these sites upheld their commitment to freedom of any legal speech for all and didn't censor them.


If there is a large enough group of people that's effectively banned from mainstream social media, it's entirely natural that they will be over-represented within the largest platform that doesn't ban them. This doesn't mean the platform in question is designed specifically to cater to those groups.


So? Andrew Torba is obviously a neo-Nazi.


Again: this was flagged, but Patrick isn't trying to be inflammatory. I think it's pretty reasonable to suggest that about Torba.


It's not even within a football stadium of being reasonable.


Maybe it'd be more fair to say "whoever's actually managing Gab". Again, see upthread.


I read the thread. It's still not a reasonable thing to say without proof, and you don't have any. By your logic, it would be entirely acceptable to say YouTube supports jihad against the West because it doesn't actively censor al-Qaeda recruitment videos. If you had a screenshot of Torba saying he was a Nazi and believed in National Socialism, then it would be reasonable to say.


Here's another for you:

https://twitter.com/Popehat/status/1027671587825172480

You sure you want to keep digging into this sewer? I'm sure there's worse we'll find.


[flagged]


I don't think you're going to make yourself any happier trying to litigate this point.


Why not? I'm right and the facts are on my side.


Maybe there's some kind of line that divides "white supremacist who spews Pepe memes and vile anti-Semitism" from "neo-Nazi", but really, does anyone care?


The only thing somebody has to do today to become a "white supremacist who spews Pepe memes and vile anti-Semitism" AND a "neo-Nazi" is vote for Trump. So, more than 51% of the country right there very much care.


Luckily, we don't have to have the argument about whether Trump voters are unfairly treated, because the evidence that Torba is an odious anti-Semite is clear and obvious to all but the willfully blind.


The original story is apparently not available at this link, but the footer, also written by Andrew Torba, owner of Gab.ai, is available: https://medium.com/@Torbahax/this-is-a-joke-and-it-is-fake-n...

partial quote:

" Gab is absolutely not a “white nationalist social media platform.” We are a free speech social media platform. We welcome everyone and have since the day we launched. My co-founder Ekrem is a Muslim Kurd in Turkey. Our Chief Communications Officer Utsav is an Indian and a practicing Hindu. Our “frog logo” that the media wants so desperately to tie to “pepe,” was inspired by Exodus 8:2–7 and was designed by our Creative Director Brandon, who is Jewish."


That's an interesting thing for him to say, because Gab is absolutely and obviously a white nationalist social media platform.

Want another example? Arguing with Ken White (Popehat) on Twitter, the official Gab account RT'd a white nationalist mocking Ken for having adopted Asian children.

https://twitter.com/Popehat/status/1026849669425520640


I think it's important to distinguish between platforms that host content on a given topic, and platforms that are specifically set up for a certain topic. For example, Reddit has /r/motorcycles, but I wouldn't call it "a motorcycling discussion platform". I'd call advrider.com or bayarearidersforum.com motorcycle discussion platforms. They're websites specifically set up for motorcycling-related discussion. A website that takes a strong stance on free speech like Gab probably has a significant overrepresentation of white nationalists, by virtue of displacement - most other big platforms ban this content, so it migrates to the few places that do allow it. In that sense I think it's fair to call Gab "a platform that tolerates white nationalists" but not "a white nationalist platform".

Equating tolerance of an idea as endorsement for an idea is the root of a lot of problems we see today, in my opinion.


Respectfully, I think you want that to be important, and in the airless vacuum of a message board it might be, but common sense says that a "free speech" venue that is absolutely overrun with white supremacist speech, run by a committed white supremacist, simply is a white supremacist site.


Are you just going to ignore the fact that the official Twitter account for Gab actively promotes white supremacist talking points? It only takes about 30 seconds of scrolling through their feed to see this.


I scrolled for not just 30 seconds or so but for at least a minute. Nowhere did I find Gab actively promoting white supremacy. As for promoting "white supremacist talking points", that is substantially different from actually supporting white supremacy, and support for that allegation is still pretty vague. The closest things I found are:

> “Hate speech” is free speech, as decided unanimously by the Supreme Court of the United States of America.

Sure, white supremacists may be more keen to protect hate speech but this statement is factually correct.

> Right-wing platforms provide refuge to digital outcasts — and Alex Jones

Again, hosting a speaker is not the same as promoting or endorsing what is said. By this logic, Berkeley endorsed Milo Y.

> At some point you have to ask yourself: just who is pushing for the censorship? Once you learn who, there is no going back. You can’t unsee what has been seen. You can’t unlearn what has been learned.

Arguably echoes allegations that Jewish people control the media, but this is vague at best.

Furthermore, the account states that it would not ban left-wing speakers who had written hateful messages about whites:

> The @verge defends the anti-white hate of @sarahjeong then turns around and maligns Gab as “alt right” for defending free speech for everyone (including Sarah!)

I think the official Gab Twitter account is consistent with Gab's claim to allow all legal speech, including hateful speech, regardless of political leaning.


> Arguably echoes allegations that Jewish people control the media, but this is vague at best.

Don't insult our intelligence. It is obviously a vile statement about Jewish people.

http://archive.is/q8Guu if you need it to be even more obvious.


Andrew Torba is obviously a neo-Nazi, so can we put this argument to rest.


This comment was flagged. It's not, like, the most substantive comment on the thread, but it's not un-substantive. Patrick isn't calling someone here a neo-Nazi, as an epithet. He's pointing out that Torba is one. I think he's probably right. If he's wrong, make an argument; don't abuse the flag button.


Are there known instances of Gab removing left-leaning content, or content speaking against white nationalists? That is the only relevant metric here. Tweets from their official account have nothing to do with it.


A metric for what? It's obviously not a sensible or relevant metric for 'is Gab a white nationalist platform'.


I just created a Gab account to try to find hashtags or accounts that are not WN.

There are some, but of course Gab just passed 500K accounts (no idea how many are active: the company is privately held and doesn't publish a breakdown, unlike publicly traded TWTR, which has 335 million monthly active users), so it's tough to find exactly which non-WN communities there might be...

I am not sure if you are saying that Gab is "majority WN" or that it is "exclusively WN". There are certainly other users (of non-white ethnicities), groups and hashtags represented:

Kenyans group https://gab.ai/jojiwanyoike

Search for "Iyer" a typical Indian (Tamil) Brahmin last name: https://gab.ai/search/iyer


> I just created a Gab account to try to find hashtags or accounts that are not WN.

You did what Andrew Torba wanted you to do. He wants people to come for "free speech", to "give it a fair chance", and so forth, so that they can be exposed to, and grow to accept, neo-Nazi rhetoric.

It's a scam. Don't fall for it.


There's already a bunch on Twitter, where I've had an account for a while.

So it's too late for me, but save yourself...!


Who is this argument trying to kid? Gab is a white supremacist message board. I just reloaded the site 3 times, and got alt-right memes on all 3 loads. The only reason anyone is having a hard time seeing this now is that Gab is dying, and half the posts are innocuous bot content they inject to make it look like the site has activity. Like I said across the thread: even the white supremacists are giving up on the place.

Are you really maintaining a Gab account? Why? You could also wear a swastika armband on the same premise: "I'm demonstrating my right to wear what I want, without being coerced by society". Sure, man. But also you're a dude wearing a swastika armband. Reconsider.


"I just created a Gab account to try to find hashtags or accounts that are not WN."

As I clearly wrote, I created a Gab account in order to look around. I've not made one post nor followed anyone.

Please remember that many HN followers look up to your posts...


Yes, and then Patrick said "don't fall for it", and you suggested that Twitter was just as bad, so I wrote a comment that both rebutted the idea that Twitter is just as bad and asked if you were really (un-ironically) keeping the account.

It sounds like your answer is no! I think: good call.


[flagged]


Cowardly obscurantism is the standard tactic of white supremacists and others with loathsome views. They know that if they come right out and say what they believe, they will get nowhere. So they engage in sleight of hand and dog whistles in order to recruit the gullible. Andrew Torba's entire platform is built on this obscurantism.

It would be an insult to my intelligence to claim that the target of this tweet is anyone other than the Jews, for example: https://twitter.com/thetomzone/status/1027695084236668928


Yikes, hadn't seen that one. That really is too bad, I hope this attitude doesn't permeate the company, because the platform is gaining steam.


No, it isn't. It's floundering, because it is a site that caters to white supremacists, with an aspirational sideline in catering to normal people. Among people who have any name recognition for Gab whatsoever, simply having a Gab account with your name on it is a pretty icky social signal to be sending.

In practice, what I see as I regularly check up and screenshot it is a whole lot of bot content punctuated by an occasional neo-Nazi post. I don't even think the white supremacists are all that invested in it.

Maybe the ICO will work out for them.


> In practice, what I see as I regularly check up and screenshot it is a whole lot of bot content punctuated by an occasional neo-Nazi post. I don't even think the white supremacists are all that invested in it.

For what it's worth, I think that's just the effect of the default feed. If you actually follow anyone it completely changes. If what you're saying about the bots is true, maybe I don't see them so much as a result of a sort of WoT effect by follows.

I certainly find the default feeds grody, but after adding "jq" and a few others to my muted words list, it's worth looking at some of the feeds for the occasional hidden gem. I will say, the space is not rich and diverse like Twitter used to be (yet).

I honestly really liked the Twitter crowd, but it seems like conversation on Twitter is dying; I shuttered my account a while back, and if Gab doesn't grow a broader base I don't really know what there will be to take its place.

Minds doesn't look too bad, but it seems to suffer from a (milder, potentially thanks to the influx of desperate refugees of Facebook in Vietnam) form of the same disease: being populated largely by untouchables (some so vile they cost you job prospects by mere proximity).

> simply having a Gab account with your name on it is a pretty icky social signal to be sending.

Yeah, I have it on good authority that merely having a Gab profile has cost me a job offer, which is why I closed it a few months ago. Reopened it today because if I die tomorrow I'd like to go with my spine intact.


Why would any part of your identity or self-worth come from maintaining an account on a dying white supremacist message board? Reconsider.


I disagree with the idea that merely having an account on said website should imply that I am immoral. I know of a number of good, honest people who have Gab accounts, and I don't want to live under the control of the bigots who would judge me that shallowly (in much the same way allowing myself to be cowed by other bigots would be wrong).


So you've got 'fine people on both sides' and 'they're the real bigots'. If you think these are somehow substantive or convincing rejoinders, you should probably re-think that. Mostly they just make you sound like a Nazi online.


... or, at least, someone who fits most people's mental model of the kind of person who maintains a Gab account.


Exodus 8:2-7 alludes to an ultimatum threatening to inflict a plague of frogs on Egypt. Because when I want to demonstrate my love of freedom and inclusivity, the first thing I think of is calling down plagues on my enemies.


> I've been screenshotting the front page for months

Out of curiosity, why?


I honestly don't know why. Something just made me think I was going to want data about it someday.


Is this something you do manually...?


Yes.


Do you have a (Twitter? ;-) feed with these screenshots?


I'm getting pretty close to getting some Citation artwork commissioned.


You want me on that wall, you need me on that wall, &c.


Just for kicks made an account and the front page currently looks like this for me;

Popular Posts

1) Thug Who Destroyed Trump’s Star and Bragged About It Gets Hit with Felony Charge, Facing Hard Time

2) Alex Jones Breaking: Son Of Terrorist Master Mind Caught Training Child Soldiers To Commit School Shootings Tune in M-F 8-11a

3) Candice Owens Tweets: BREAKING: 71 Illegal aliens were shot, trying to cross the border this morning! —Just kidding. It was actually black people in Chicago last weekend. You can go back to not giving a damn, liberals.

4) Military Support: ‏Honoring Air Force Maj. Walter D. Gray who selflessly sacrificed his life six years ago in Afghanistan for our great Country. Please help me honor him so that he is not forgotten.


Well, I don't have an account, but this blew up enough that I only needed to click "explore" and I stumbled on a repost containing a screenshot of the original post. Here you go: https://gab.ai/PNN/posts/31263515

Wow, what a sad website. I don't quite understand the mindset/vitriol/hate/anger/etc here (the reason for it or need for it) psychologically speaking.

In any case I've taken my own simple screenshots of the repost in case it disappears, which I expect will happen eventually. If anyone absolutely needs them let me know.


> Wow, what a sad website

How is a website sad just because it offers true free speech? I think it's sad that the perception of free speech is being cast in such a bad light. Free speech is _only_ relevant if it covers the controversial things, too.


Right, I'm referring to the implementation, not the theory/back-of-napkin goal.

I say that this particular website is full of what I would broadly/sweepingly describe as toxicity. Not 100.00% full - there are surely a few people in there who are balanced - but I have no inclination to wade through the noise to find them; the process would be too depressing.

This is a rough description of what I mean by "sad".

I do completely agree that free speech is necessary - for things like oppressive dictatorships in foreign countries, hardware ownership versus do-not-debug laws, and broadcasting events that the mainstream media have blacklisted.

I don't think free speech is for racism, which I don't believe in in any case: we all bleed red, and there's no point going rabid over an arbitrary group of people, since any grouping (of anything groupable) that scales large enough will net you all the "look how bad this is" statistics you could possibly want, which completely invalidates the basis of racism et al in the first place (IMO).


> I don't think free speech is for racism

"Free speech without x" is not free speech.

Also, you don't need to look as far as "oppressive dictatorships in foreign countries" for a need for free speech. Over 50 people had their homes raided in the middle of the night for criticizing the government.

edit: in Germany I meant


For what it's worth: To the user, Gab is just "free speech Twitter". It just happens that the people whose speech is being repressed say things which make you want to repress their speech. Twitter used to be "the free speech Twitter" for many of these people, and their arguments shared a platform with unhindered common speech (and, as was my personal pleasure, were rebutted by many people who were at no personal risk of being repressed).

There happen to be plenty of people who go around collecting and rebutting white (and other ethnic) separatists/nationalists on Gab, and they aren't being repressed (in fact, some are receiving tips for the service).


> My point here is: everybody knows that's what Gab is. [...] a call for violence directed towards Jewish people.

Twitter is full of racism and calls for violence (even genocide) against white people


You've been using HN primarily for political and ideological battle. That's an abuse of the site and we ban accounts that do it, regardless of their politics. Please review https://news.ycombinator.com/newsguidelines.html and use HN as intended from now on.


Threats of violence already violate Gab's ToS:

"Users are prohibited from calling for the acts of violence against others, promoting or engaging in self-harm, and/or acts of cruelty, threatening language or behaviour that clearly, directly and incontrovertibly infringes on the safety of another user or individual(s)."

https://gab.ai/about/guidelines


I wonder what this says about Gab's reaction.

Why would they give Little the option to delete his posts instead of banning him and removing the posts themselves? If I were in their position, it wouldn't even be a question of free speech, considering he was breaching the ToS.


The content here is quite disgusting, and I certainly do not defend it, but I really take issue with what is happening here in principle.

Overall this is similar to the runaway effect that environmental negligence can have. At a certain point it will be too late to solve the problem, i.e. to control emissions, or, under these different circumstances, to oppose this kind of authoritarianism. The very means to do so will be either impotent or impossible.

By then you are reliant on some new magical invention to resolve the problem, or a large death toll in the millions, which changes the entire landscape.

Corporations certainly have the right to do this, but they are only choosing to do so because there are strong winds blowing in this direction. It's important that they are opposed, before this paves the way for big mistakes to be made.

It's important that these kinds of ideas see sunlight. Ideas that promote tolerance, peace and common sense will win in the end, provided we keep talking. If talking comes to an end, we know what comes next. These ideas will only find more room to grow, and see greater validation when they are opposed in this kind of authoritarian manner.

Rather than advocating a strategy where we are relying on governments and corporations to solve these problems, we should be the ones fixing our societies and arguing against disgusting ideas such as this, with well reasoned arguments, or even humor, both of which have historically been great at winning ideological battles.

I am sure this crowd knows their Aaron Satie.

"With the first link, the chain is forged. The first speech censured, the first thought forbidden, the first freedom denied, chains us all irrevocably."


I hope this crowd knows their Joseph Goebbels.

"When our enemies say: But we used to grant you freedom of opinion -- yes, you granted it to us, that is no proof that we should do the same for you! That you gave us that just proves how stupid you are!"

(source: https://de.wikiquote.org/wiki/Joseph_Goebbels and https://www.sr-mediathek.de/index.php?seite=7&id=37143 ; translation by me)


So take Goebbels' advice? Not sure what you are getting at.

Our enemies are fascists who will take away our rights, therefore anyone suspected of eventually becoming a fascist will have their rights preemptively taken away so that it does not come to that?


Little's posts on Gab are awful, and Microsoft is within its rights to refuse service to anyone. However, they don't have the capacity to do a fair manual triage of all the services they are hosting, and they should avoid doing it until legally required to.


Too lazy to actually look it up, but I think MS is getting ahead of the lawsuits from Europe. Europe will fine you X dollars for every second / minute / hour or something that a bad post is left online. So even if Gab does not comply with EU regs, Azure surely will, and the EU will go after the weakest link. MS knows this, and they don't care about Gab or free speech or whatever, so they're going to shut them down if they don't comply.

IMO Europe's rules are designed this way on purpose; they are in effect exporting their rules, and if that's allowed to continue the internet will eventually devolve to the lowest common denominator. Whoever has the biggest, most enforceable fines rules the internet.


I don’t think there is a pan-European set of free speech laws. These are per country.


> However they don't have the capacity to do a fair manual triage of all the services they are hosting

Perhaps, but that's not really an argument against complaint-based ToS enforcement. The US (or any other) government may lack the ability for a complete and fair manual triage of all behavior subject to their jurisdiction, but few people argue that complaint-based enforcement of criminal law is therefore unacceptable.


I don't follow your reasoning. Why is it wrong to refuse service to "bad" customers as they come to your attention, but it's ok to do it if you review everyone at once?


The DMCA process is far from perfect, but it is a good example. Anyone can file a DMCA notice, but if notices are abused, you can sue back.

Banning Gab for what one of its users did seems to fall outside of any process.


> Little’s posts had advocated for physically harming Jews.

Fighting words are not covered by the First Amendment, according to the established Supreme Court interpretation. Regardless of your views on freedom of speech, the First Amendment is simply not relevant to the posts in question. The courts would certainly uphold Microsoft's decision to step in.


I have an alternate solution, especially for Gab and Voat. Rather than allowing them to become known cesspits, advertise them to people and say, "see how others' opinions exist" - people will sign up out of curiosity, they'll sign up to argue, etc. But most importantly, they'll be there on that platform, and perhaps some people will actually learn to communicate openly with people they disagree with or whose views they despise.


That would honestly be a pretty good pivot. The trouble with Gab and Voat is that if you only bill yourself as a "free speech alternative," you get populated almost exclusively by the sorts of people who have to worry about that (i.e. fringe characters). The community becomes strongly influenced by the "founder effect" of the digital refugees that made up its founding stock.

It would be interesting to have a sort of two-column format, where one column is who you normally follow, and the other column is an auto-generated list of people in the most opposite social-graph cluster.
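As a rough sketch of how that second column could be generated, assuming you already have each account's follow list and some precomputed social-graph communities (all names here are illustrative, not any platform's real API):

    def jaccard(a, b):
        """Overlap of two follow sets: 0 = disjoint, 1 = identical."""
        return len(a & b) / len(a | b) if (a or b) else 0.0

    def opposite_column(user, follows, clusters, k=10):
        """Pick up to k accounts from the community whose members' follow
        sets are least similar to the user's own follows.

        follows:  dict mapping account -> set of accounts it follows
        clusters: iterable of lists of accounts (precomputed communities)
        """
        mine = follows.get(user, set())

        def cluster_similarity(cluster):
            sims = [jaccard(mine, follows.get(acct, set())) for acct in cluster]
            return sum(sims) / len(sims) if sims else 1.0

        farthest = min(clusters, key=cluster_similarity)
        return [acct for acct in farthest if acct != user][:k]

In practice you'd probably want to filter out accounts that violate the ToS and rank by quality rather than taking the first k, but the "most opposite cluster" idea reduces to a similarity measure like this.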

The lifeblood of places like Twitter is the ability for people to get into loud arguments that feel like they matter more than they actually do. If Twitter ever actually did a full purge of one side of the ideological spectrum, they'd see a massive decrease in interest. Deep down, people want to fight, and so they need the other side around.


Basically yes - then again, what I'd like to see (and which I do see somewhat on Voat) is that you have fringe people on all sides - and yes, I mean all, not "both" - there are more than two sides to most human issues.


Yeah, no. People will see the vile toxicity right up front (it's not hidden at all there), and they'll leave in 5 minutes.


I think I understand what you're saying, but if it was so vile and so toxic, why would people keep returning to it and stay so involved? Whether you happen to like it or not, these are communities just as much as any other. I think it would be better for the human condition if, instead of dismissing them as "vile toxicity", we looked at what they're saying and what's caused them to feel a need to say it.

Then again, that takes more than five minutes.


Their opinions are plainly wrong by any modern standard (e.g. killing "subhumans", etc.), so there's no merit in approaching them on that front.

Though as you say, it might be necessary to understand them to prevent such extremism from spreading. But at that point it's very well worth asking if all that 'understanding' is really more effective than just banning such platforms.

It worked for Reddit, for instance. There was a study showing that the prevalence of hate speech decreased after they banned the racist subreddits. Even people who had been engaged in those communities became less extreme once their echo chamber was removed.


It "worked" for Reddit because people moved over to Voat or back to the chans. Just because the community moves off a certain platform doesn't actually change the existence of either the people or their opinions. I want to make clear that I agree with you that an opinion that terms any member of genus Homo as subhuman is wrong; and that killing is also wrong. On the other hand, I for one am curious about how these memes originated and continue to propagate; and I feel that direct engagement is the only way we'll find that out. Massive cultural impact cannot be made by simply avoiding other opinions. After all, Columbus massively underestimated the circumference of the Earth compared to Eratosthenes - but the Portuguese still funded his travels, and his trip was successful, even if by complete accident. Engaging with those who have wrong opinions can sometimes have unexpected positive impact.


Because the people who are there are the ones who agree with that vile toxicity.

"we looked at what they're saying and what's caused them to feel a need to say it."

While that might be needed, Gab is not the place to do it.


Where, pray tell, is the proper place to engage with these people as fellow human beings, then, in your opinion? Is it loosely defined as a place where the rhetoric and world-view you prefer (and which I probably prefer as well) rule with an iron fist, with people actively censoring what someone says and forcing them to change the way they address you? Over time, lines have been drawn, and many people have become disenfranchised based on the unwritten rules of certain Web communities. I'm not suggesting for a moment that you'll suddenly change your mind and agree with their views - or that I would. The people who feel strongly about certain issues have formed communities, and they supposedly defend the right and virtue of free speech. So, speak to them! Freely. Don't seek to control, but to understand. And then when you do understand, if you can find common ground, work from it. If not, sit back and consider where your morality differs so distinctly from another member of the human race that you can find no core values in common. I've found doing that makes for a much better result in my dealings with both left-wing and right-wing radicals - because I can usually find at least something we both see as positive.


"Where, pray tell, is the proper place to engage with these people as fellow human beings then in your opinion?"

Somewhere not shielded by the anonymity of the internet. When online, especially at a place like Gab, or even Twitter, people are predisposed to behaving in a more trollish, uncharitable way. In person, many of those behaviors don't happen.

"with people who are actively censoring what someone says"

I'm sorry, but I can't continue after this. You set up this sob story about people not treating those on the alt-right as people, yet you decry others telling the alt-right that they have to treat others as people, and not, for example, say that Jews should be treated as livestock. It's part of this really shitty mindset that everyone must tolerate the alt-right, but the alt-right doesn't have to tolerate anyone else, and should even be encouraged to shit on others.

If they want respect, they need to show it to others. If they're not willing to do that, then it's no surprise that they feel they get no respect in turn.


Seriously. People have better things to do with their lives.


This is a very clear example of hate speech, and people in the comment section seem to be unknowingly or deliberately conflating it with free speech. Free speech does not mean you can call upon people to commit violence against a particular set of people.


Yes it does, actually, whether you like it or not. But maybe you'd like to call into question whether such speech should still be protected as free speech. The difficulty is in establishing where you draw the line.


I spent some time digging through the posts (and researching the viewpoints). My conclusion: this article and the Twitter posts feel like an advertisement for Gab. It's currently building a userbase, primarily from people banned from Twitter for expressing extreme views. In this case it's capitalizing on a very well-known user, a stance supporting the user, and the user self-censoring.

Something feels off (see my quote). This is timed around Gab seeking funding (see Twitter: https://mobile.twitter.com/getongab/status/10275500729328517...)

I suspect further exposure will push Gab to censor. However, in the interim it will grow on the back of the ban-or-remain-open controversy.

Quoting my own reply:

> Researching Gab: the posts from Gab were immediately followed by a funding request. Users pointing out how it appears to be "Twitter for racists" add (cherry-picked) posts highlighting the advocacy of destroying Jewish memorials (and enslaving the Jewish people).

> To comment on the article: this appears to be shining the sun on what appears to be a racist bastion advocating violence. (I might be wrong. Gab didn't remove the posts, the user did.) Or to rephrase Microsoft's threat as I'm seeing it: if [Gab] does not remove a post [advocating violence], Gab must seek hosting on another platform.

> So which strategy worked? Ban or exposure? For Microsoft? For Gab? For the user?

> I don't know that the answer is the same for each group. I suspect, given the article, exposure pushed Microsoft to push for a ban. The ban pressured the user by threatening their platform. So where does Gab stand?


Who are the organisations pushing for these bans? Are they completely anonymous?


It could also be just Microsoft employees:

> Late yesterday, Dorsey posted a tweetstorm explaining Twitter’s decision not to ban notorious conspiracy theorist Alex Jones ... At that point, current Twitter employees got in on the action. Marina Zhao, a senior software engineer on the Twitter Moments team, called out her CEO for muddled thinking


Who said they are organizations? It doesn't take much to see some content, do a little research to identify the responsible host, and file a ToS complaint.


Yeah. It just seems very coordinated. I would have thought had it been random individuals there would be a lot of chatter and cheers when a call for a ban succeeded. And some chatter on who to pick. But I could be wrong.

Just wondered if someone could put a face on who is pushing for this.


Alpha Delta Lima. (Apparently spelling out the acronym gets your post blocked???)


[flagged]


Looks like the ADL didn't like my comment. /s


We've banned this account for using HN primarily for political battle. That's not what this site is for, and we ban accounts that do it, regardless of their politics.

https://news.ycombinator.com/newsguidelines.html


[flagged]


Just to be clear, his "political views" are a call for violence.


Interesting; you see, I see calls for violence on Twitter all the time, but their hosting company isn't threatening to ban them. And if you mean me by "his": I don't call for violence.


'his' above refers to Patrick Little, so unless you're him further discussion is moot. If you are him, then we have a disagreement.


Twitter hosts its own servers AFAIK.


We just asked you to please go read the guidelines and follow them. Could you do that now? We don't need any more flamebait here.

https://news.ycombinator.com/newsguidelines.html


Actually I responded to that, and you never got back to me.

See here: https://news.ycombinator.com/item?id=17671916

Feel free to ban me though, this site is the worst echo chamber I've ever seen, hiding behind selectively enforced rules.


I'm not convinced this is a true example of censorship. Historically, censorship refers to an institution's ability to suppress information before it becomes public, or to manipulate information to serve its own ends. It assumes an almost complete control over the means of information production, whereby a populace barely even sees a glimpse of the censored information. Since Little's neo-Nazi propaganda is public and is being openly discussed, he cannot truly be censored. It's more a publicity stunt by Microsoft. There is certainly censorship in the U.S., but this doesn't seem like a true example of it to me. I also think it's a slippery slope fallacy to say that removing a forum about torturing Jews (and other hate speech) will inevitably lead to totalitarian censorship of all provocative ideas. Please show me an example where a government or a private institution withdrew its support of speech like this, which then led to the wholesale censorship of free speech in a country.


Microsoft should be able to decide who they do business with. That's typical contract law almost everywhere.

If they decide that certain content is against their own values and culture, or risks the safety of their own staff, even if that risk isn't an immediate one, they should always have the option of terminating the contract.

That's not censorship, that's business. A company doesn't owe anyone their service, unless they provide a public utility or because they're a monopoly. In terms of web hosting, neither is the case with Microsoft.


Microsoft's comments are in keeping with a common misconception about the First Amendment.

For clarity, the relevant part of the First Amendment is here:

> Congress shall make no law ... abridging the freedom of speech, or of the press ...

Nowhere does it say anything about private companies, other groups, or even other branches of government having to obey this rule.

I don't have data on how many people actually believe the First Amendment lets them say whatever they want wherever they want. I'm guessing it's a pretty high number, though.


Or just use this secure key fob holder: https://youtu.be/-MILCnDczC0?t=9m17s


From Robert Bolt's A Man For All Seasons (substitute "principle of free speech" for "law").

William Roper: So, now you give the Devil the benefit of law!

Sir Thomas More: Yes! What would you do? Cut a great road through the law to get after the Devil?

William Roper: Yes, I'd cut down every law in England to do that!

Sir Thomas More: Oh? And when the last law was down, and the Devil turned 'round on you, where would you hide, Roper, the laws all being flat? This country is planted thick with laws, from coast to coast, Man's laws, not God's! And if you cut them down, and you're just the man to do it, do you really think you could stand upright in the winds that would blow then? Yes, I'd give the Devil benefit of law, for my own safety's sake!

-----------

People in positions of power generally assume that they will always be in power. Many people during the Obama presidency wished that the president were less susceptible to gridlock between the various branches of government. In January 2016, most people never even considered the possibility that within a year, Donald Trump would be sworn in as President.

Right now, the progressive center-left is in a very strong position with regard to the tech industry. All the major players are sensitive to their causes, and against the alt-right. A lot of people are unhappy that the current norms of free speech are causing the social media companies to be slow in taking down alt-right hate content.

People forget, that things change. What if Zuckerberg decides that as a rich, white, male who is accused of indirectly helping Trump get elected, his future political prospects are better if he goes all in and casts his full support behind Trump? What if Twitter continues to have monetization problems and get bought out by a group of wealthy GOP donors?

How easy would it be to call any discussion of European colonialism and imperialism, or of slavery in the United States, hate speech against whites? How easy would it be to label any discussion of increasing taxes because the 0.1% are not paying their fair share as hate speech against the rich? How about labeling any post advocating protest against ICE as targeted harassment?

If you knew that in 5 years it would be the right that was running Facebook, Twitter, the ISPs, etc., what mores concerning free speech would you want to preserve, even when a private entity is involved?

I think a successful democracy in a polarized country involves both sides realizing that there will come a time when they will be out of power and encouraging cultural mores and norms that make sure they are able to communicate freely and have employment even when the other side is in power - either in government or in the private sector.


"The Net interprets censorship as damage and routes around it." -John Gilmore

Lately this is getting absurd. I believe this quote is still true, but the amount of 'I don't like what is being said so therefore you should be silenced' is going too far. I don't agree with what this guy said (or even know who he is aside from what was said in the article), but I'm alarmed that Microsoft would go after Gab for a user on Gab's platform.


Lately we're sort of learning that absolute freedom to express any opinion you like is kind of a bad idea, especially at web scale. The example of Germany, which is freer and more democratic than the USA despite having strict laws forbidding expressing any sort of Nazi-like opinion, supports this.


I disagree. I would rather live with the freedom to express myself, be who I am, believe what I want, speak my mind, and deal with the consequences of it than deal with suppression and censorship. In the US it's not that we're learning this is a bad idea, though I will grant it is under attack. It's more that some people can't accept that others might simply be different or think differently than they do, and would rather call it a bad idea (the easy way) than engage them in conversation or accept that we're all individuals with different thoughts and backgrounds (the hard way), or who believe that being offended is something that should never be allowed to happen.

Germany is not freer. They do not have freedom of speech anywhere close to the US's in scope. One easy example is Section 185 of Germany's criminal code, under which you can be punished for insults. Section 90 is also interesting, and I bet people would dislike that one if we had it in the US today. If you see those as freer, by all means enjoy them in Germany. I'll stick with the 1st Amendment.


How is Germany more free and Democratic? Aren't they purposefully less free because of anti-Nazi laws?


Germany consistently places higher than the USA on international press freedom indices. The press, in general, is a protected institution there, whereas here, blowhards like Trump do their damnedest to intimidate journalists out of doing their job, which is to expose the dealings of government to the public.

And you don't have to read Hackernews for long to see how terminally fucked and anti-freedom the U.S. justice system is. Things like plea bargaining, money bail, and systemic racism in law enforcement and criminal proceedings make U.S. "freedom" illusory unless you're white and wealthy.

As for democracy, it's well known that votes in Congress can be easily bought in the US, far more easily than in Germany's parliament.


How is this freedom attributable in any direct way to restrictions on "hate speech"?


Well, you could argue that Germany successfully avoided renazifying in part because of its anti-hate-speech laws.

Even absent this, the point is that Germany is, in practice, freer than the USA despite having a less absolute stance on free speech.


But because of those, other groups whom the Nazis and other alt-right groups would have targeted are more free to participate in the conversation.


Characterizing this kind of thing as just "I don't like what you said" is not being honest with your argument.


Who can make better tools for IPFS and other services that are more difficult to take down?

The tooling is the main thing stopping people from using more autonomous decentralized services.
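
To make that concrete, here's a minimal sketch of what today's tooling looks like from Python. It assumes a local Kubo (go-ipfs) daemon running with its default RPC endpoint at 127.0.0.1:5001 and the third-party requests library; the endpoint, field names, and example values are assumptions about a standard setup, not anything specific to Gab or its hosts. Even "add a blob and read it back" means talking to a daemon's HTTP API, which is exactly the friction friendlier tooling would need to hide:

    import requests  # third-party HTTP client: pip install requests

    # Default RPC endpoint of a locally running Kubo (go-ipfs) daemon (an assumption).
    API = "http://127.0.0.1:5001/api/v0"

    def add_text(text):
        # Add a small blob to the local IPFS node; the daemon replies with JSON
        # whose "Hash" field is the content identifier (CID).
        resp = requests.post(API + "/add", files={"file": ("post.txt", text.encode())})
        resp.raise_for_status()
        return resp.json()["Hash"]

    def cat(cid):
        # Read content back by CID, from the local node or the wider network.
        resp = requests.post(API + "/cat", params={"arg": cid})
        resp.raise_for_status()
        return resp.text

    cid = add_text("hello, decentralized web")
    print("published as", cid)
    print(cat(cid))

Wrapping plumbing like this in interfaces a non-developer can actually use is, as far as I can tell, where most of the missing work is.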


"The Net interprets censorship as damage and routes around it." -John Gilmore

Lately this is getting absurd. I believe this quote is still true, but the amount of 'I don't like what is being said so therefore you should be silenced' is going too far. I don't agree with what this guy said (or even know who he is aside from what was said in the article), but I'm alarmed that Microsoft would go after Gab for a user on Gab's platform.


Religions? Are they all scientifically verified ideas? People still believe in them. Not only that, religion is also given a special place in the constitutions and laws of some countries.

Religion is still going strong. Try to take away anyone's religion based on refutations, then tell me that refuting bad ideas really works.

Till then, many others will keep mashing bad ideas together into new religions.


Good for Microsoft for drawing the correct line in the sand. Calls to violence are not protected speech.

The Alex Jones censorship is another matter. His stuff is pretty crazy, but not worthy of being taken down. Louis Farrakhan says lots of crazy stuff and I don't believe he's been censored at all. There is definitely a double standard at play.


> Neo-Nazi deletes anti-Semitic posts from 'alt-right' Twitter

Who writes these terrible headlines? Why do these blogspam sites refuse to put the actual topic in the subject?

This makes it sound like Twitter is alt-right. It's not even accurate, since Gab isn't political or "alt-right"; Nazis use it because it refuses to censor.


> Gab isn't political or "alt-right"

haha, good one!


It's the last recourse for the alt-right because other platforms censor them. I'm sure if you posted far-left stuff on there it wouldn't be censored either, but there's no point, because most social media platforms don't censor far-left thoughts anyway.


> most social media platforms don't censor far-left thoughts anyway.

They do, though.

https://www.counterpunch.org/2017/08/09/google-censors-block...


Google results are a social media platform?

And I said "most".


Well, there are all the people who point out that Twitter will take ages to respond to a report of a conservative-leaning account making threats and will usually end up saying "that doesn't violate our guidelines". But somehow quickly suspends left-leaning accounts which say similar things. Especially if you say them in the direction of a blue-checkmark conservative-leaning account.


Google is supposedly a "common carrier". Though I'm not sure how they're hanging onto that legal title.


I wish it wasn't, but it is. It's a decent platform with a lot of potential, and the filtering that they put in place early on for inbox abuse was way better than anything Twitter ever did.


[flagged]


This a great example of a comment from which we learn little but are agitated much.

> Eschew flamebait. Don't introduce flamewar topics unless you have something genuinely new to say. Avoid unrelated controversies and generic tangents.

https://news.ycombinator.com/newsguidelines.html


This is not relevant to net neutrality as the phrase is traditionally defined.


Neutrality is about the pipes, which are a natural monopoly. It has nothing to do with the users of the pipes.

Microsoft has a right to choose not to participate in broadcasting speech that it disagrees with, and the reason that's OK is because there are many other platforms for Gab to move to.

For many (most?) people, switching to a different ISP is impossible or comes with severe drawbacks.


Microsoft is an edge provider, not an ISP; net neutrality is largely about stopping ISPs from controlling edge providers' access to consumers. It's not about regulating edge providers.


net neutrality has nothing to do with freedom or neutrality of speech online, especially in regards to a private company's terms of service.


insert picard facepalm here.


Ahh yes, multinational corporations worth hundreds of billions, also known as the left. Insightful.


The only two other comments in this thread are saying that this is "good".


I don't think other people see the connection to "the left" or net neutrality.


Nobody on "the left" has ever argued that companies can't have a terms of service that prohibits bad behavior.

Net neutrality has always been about preventing the companies that control the last mile for consumers from extorting additional money from server operators that are already paying their own ISP for bandwidth.


Not quite nobody. There were arguments that without Net Neutrality ISPs may develop TOS that target certain data/content services with paywalls, throttles, or blocks - to the point where politically motivated censorship would be a big concern.

This censorship argument played a core part in the Net Neutrality campaign. Note the specific reference to censorship being the biggest problem in the headlines and leading paragraphs of the following popular articles:

1) https://www.nbcnews.com/think/opinion/ending-net-neutrality-...

2) https://qz.com/1158328/what-will-happen-now-that-net-neutral...

3) https://www.vox.com/2017/12/14/16774148/net-neutrality-repea...

4) https://futurism.com/ny-attorney-general-fcc-net-neutrality/


'alt-right' Twitter just sounds like a subculture on Twitter, not a separate network called Gab.

Take note, journalists... or take note, devs who happen to work in open office plans next to the writers: let's not refer to things off of Twitter as if they were subcultures of Twitter.


GL;HF, Microsoft Azure: attempting to dictate your customers' content policy (in cases for which Azure is clearly not liable) is one more excellent reason not to buy Azure, right up there with only supporting IPv6 on load balancers (not directly on hosts).

Though on the flip side, I think some of these posts could violate Gab's own ToS, so even if the OP hadn't removed them voluntarily, they would likely not have survived long anyway.


Slippery slope, if I've ever seen one. This can and will be used against legitimate dissidents and protesters at some point, that I can pretty much guarantee. Bottom line: I'd be much more comfortable if takedowns could only be done in response to illegal activity on the account, and only after a court order. Otherwise some techno-bureaucrat serves as a judge, jury, and executioner, and I'm categorically not OK with that.


This thread is a good example of the "sunlight is the best disinfectant" fallacy.

It is debunked by research. The thread has countless instances of it. In each, someone goes to the trouble of debunking it. The "exposing bad ideas reduces their power" camp is unmoved. The "the research says you're wrong" camp is unmoved.

The false notion is firmly wedged into the public mind because the research came too late. I don't know a good way to prevent this while not shutting down good but unpopular ideas. Letting Nazis, goofballs, and fascists on to all the big social media platforms has obviously not reduced their numbers.


Can you source that? Otherwise I can't tell if I'm arguing with you, your source, or a straw man.


No one asked for a citation on the widespread claim that exposing bad ideas to the light makes them go away, or at least decline in popularity. It's stated and taken as truth, but no one provided proof that it's true. I matched the standard of evidence applied to the original claim.

It might be wise to actually investigate that bit of "common sense" before betting the future of the world on people's ability to deal with a flood of misinformation. Maybe I'm wrong. No one bothered to back it up with evidence though. Start there, not with my response to the claim.

You could wisely point out I claimed there's research, and I could at least provide it. The research I've seen tells me this won't help since people tend to discount research that goes against their beliefs, and use research that supports those beliefs as reinforcement. It won't do you any good if you start out with a strong belief in the unsupported original claim, as seems to be the case with most people I see here making it.


> You could wisely point out I claimed there's research, and I could at least provide it. The research I've seen tells me this won't help since people tend to discount research that goes against their beliefs, and use research that supports those beliefs as reinforcement. It won't do you any good if you start out with a strong belief in the unsupported original claim, as seems to be the case with most people I see here making it.

Come on man, I think I'm on your side here, but this is a cop-out. If you're not sure where to find the studies, just say so -- otherwise, why hold them back at this point? (Surely you don't hold the extreme view that everyone is completely impervious to evidence, so you can't just say that the evidence proves itself not to be worth sharing.)

I too remember reading about evidence contradicting the 'sunlight is the best disinfectant' meme -- for example, studies demonstrating that false claims stick with people and continue to affect their thinking, even to the point of being held as beliefs, after they are convincingly refuted (or perhaps even withdrawn by the person who presented the original claim, who admits that they completely made it up). If I have time later, I'll see if I can find some good sources to link to.



I didn't ask for sources on those because they looked believable. Worse, I hadn't seen support for banning until your post. You were the one carrying the side of this discussion that supports Microsoft's pressure on Gab. I want to believe your critique and take your side (it's a very popular one online). The article (unless I missed this) wasn't presenting either claim as superior; it was reporting the news. While many people discount evidence, I hoped to have a starting point for research and looked to you for support of banning. I thought you might be able to provide sources to guide me toward your way of thinking. You didn't, and instead justified that by fighting the straw man you built. I'd like to believe you; however, your argument is self-defeating.

Instead it feels like you've belittled me for not already taking your side. You're withholding the research you have, and I'm supposed to trust you based on this discussion? How?

I tried to do research. What I found:

1) I don't know the right search terms to find articles advocating banning users to solve the problem.

2) Articles discussing the problems of a lack of formal research: http://gwdspace.wrlc.org:8180/jspui/bitstream/1961/8566/4/A4...

3) A defense of exposing behaviors as a means of change: https://heinonline.org/hol-cgi-bin/get_pdf.cgi?handle=hein.j...

What I've gleaned:

1) A policy of open communication is useful in groups.

2) Exposing problems can lead to meaningful change.

3) Open communication in a company can help build a powerful culture.

As this pertains to online forums: nothing; the material is talking about in-person group discussion. And I failed to locate good articles on that subject.

Researching Gab: the posts from Gab were immediately followed by a funding request. Users pointing out how it appears to be "Twitter for racists" add (cherry-picked) posts highlighting advocacy of the destruction of Jewish memorials (and the enslavement of the Jewish people).

To comment on the article: this appears to be shining the sun on what looks like a racist bastion advocating violence. (I might be wrong: Gab didn't remove the posts, the user did.) Or to rephrase Microsoft's threat as I'm seeing it: if [Gab] does not remove a post [advocating violence], Gab must seek hosting on another platform.

So which strategy worked? Ban or exposure? For Microsoft? For Gab? For the user?

I don't know that the answer is the same for each group. I suspect, given the article, that exposure pushed Microsoft to push for a ban, and the threatened ban pressured the user by putting their platform at risk. So where does Gab stand?


We are living in a new century of censorship. Big Tech companies have become so big that they are the ones deciding what to allow or not allow, based on some vague guidelines ("hate speech" means everything and nothing). It has become extremely difficult to put an opinion out there without relying on at least one of them at some level.

This weird censorship is perfectly legal since it's coming from private entities. What really annoys me is that it's always the same side that is "censored", all of the FAANG/MAGA companies being extremely left leaning.

There is going to be a backlash. It is mounting, and people are starting to realize it.


> What really annoys me is that it's always the same side that is "censored"

Untrue. Religious extremist content has been silently taken down for years.

People advocating for workers' rights, anarchists, and communists have all faced a history of censorship online.


I don't know; somehow along the way, videos of beheadings and other cruel content became OK, while simple text and opinions can now put entire sites at risk. I am not familiar with the content of the messages, but from now on it's enough for certain ideas to get labeled the same way, and then they will have to be deleted. The number of labels will increase, and so will the reporting and the deleting.


The letter is encouraging in that it at least softly implies Microsoft applies a First Amendment analysis to takedown issues. A First Amendment analysis is far superior and more just than the treatment applied to Alex Jones and to any number of the tens of thousands of removed YouTube videos, and so on. At the end of the day, tech companies need to realize they are just common carrier pipes and have no more authority to police the bits on their servers than a city bus has to police the politics of its passengers.



