
Musk and everyone else are right that we do need a platform with free speech. The only problem is that all those free speech advocates are actually NIMBYs when it comes to free speech.

I have been following self-proclaimed free speech absolutists (because I, too, believe in free speech but don't believe it exists) and they are not at all the kind of people who say "I hate what you say but I will die defending your right to say it". Everywhere, these people are curating comments and posts to push an agenda.

The only somewhat free place I've seen is 4chan, but it contains so much toxicity that it's barely bearable.

Still, I like that the Musk and Kanye kind of people claim that they want free speech, because at least we can hold them responsible when they don't deliver it. This is in contrast with the outright fascist, who cannot be held responsible for anything because he doesn't claim virtue in the first place. It's a bit like companies doing greenwashing, which can be exposed when they don't deliver on their claimed virtues, versus companies who don't even claim such virtues and instead pretend that it doesn't matter. Those who claim virtue are better even if they ultimately fail.




I used to be a free speech absolutist, but I am not any longer, especially when it comes to social media.

The argument in favor of absolute free speech for me was basically “let everyone hear everything and make up their own mind”. This presumes that people are swayed by the content of an argument. This is a false assumption, people are mostly swayed by the volume of the argument. This is well documented in psychological research. Now, if everyone had the same level of visibility for their personal speech this would just lead to an ersatz version of opinion democracy, where the most often held opinions would rise to the top, which wouldn’t be a bad thing.

But people don’t have equal visibility. The reach of a wealthy or famous person is so much greater that in the political arena basically only the speech of the wealthy and famous ends up having enough volume to convince people, even if it starts out wildly unpopular and even if it is objectively false. Social media are especially sensitive to this thanks to the ability to buy access to views without the viewers even realizing, to micro-target audiences, and to have zero independent vetting of what is said. This then perverts absolute free speech into a weapon used by the powerful to deceive and subvert democracies.

That’s why I think that to protect democracies we must have some limits on the ability to get speech amplification through (social) media, but I don’t have a hard and fast rule for what that should look like. It is far easier to say “let everything pass” but that is the easy way out and ultimately bad.


Same here. The notion of "free speech" was one of the most successful and liberating memes (in the original sense of the word) in human history. But with the advent of technology, overflow attacks on free speech make unrestricted speech as useless as no speech.

It's like living in darkness, and then someone invents light, and everyone cries "more light", and it's great, and then after a while the light gets so bright that it's blinding, making the light useless for its original purpose of letting you see things, and yet we still cry "more light" because we're afraid of going back to the darkness.

I don't know what new thing to replace the rallying cry of "free speech" with. Something about signal-to-noise ratio, but all the alternatives involve trusting people to moderate, which is obviously an undesirable property compared to the original concept, but I think it might be simply unavoidable. At a high enough level, free speech itself can be used to eliminate free speech.


To borrow from Popehat:

> If you block people on Twitter you’re not truly open to different arguments or ideas. Similarly if you were truly open to trying new and different foods, you’d eat this hot dog I found in the gutter.

I think in the context of social media the replacement/adjunct rallying cry is "free association", i.e. moderation. I don't have to engage with racist nonsense or the people who produce it.

How exactly that's done is certainly an area for competition/innovation between the social networks, but ultimately the ability to not have to hear some categories of speech is the answer.


But then we get into the balkanization of our society with increasing polarization and extremism, no?


Before social media, did anyone read every book ever published? Did anyone read all the rejected manuscripts to avoid the censorious hand of the publishing houses? Of course not. We accepted that someone (editors) was doing a first-pass quality check, and even then we picked which areas interested us.

There are two related but distinct problems: the moderation problem and the village idiots problem[0]. Polarization _can_ come from moderation, but there's also a whole debate to be had about what is driving what. For example: Alex Jones' whole saga has been spun by some as "being punished for conservative beliefs", so yeah, I guess if he's a conservative then him being pushed off social media might cause polarization. BUT I think it's important to note that 10 years ago, if you said Jones was a conservative, almost _all_ conservatives would have said something like "the interdimensional vampire guy? Don't lump us in with that crazy bastard". During the intervening years right-wing leaders have increasingly signaled that Jones is one of theirs. That was a top-down series of decisions more than social media's impact. In order to believe that "your team" is being punished you already had to believe that Jones was on your team. If the statement "Alex Jones is on my team and I'm on a mainstream political team" is true, then you're _already_ polarized. The moderation might make it worse but something severely fucked up has already happened.

The (potentially violent) extremism, though, is really about the idiots getting together and self-reinforcing (for example, incel groups periodically spinning out a mass shooter). Moderation isn't really going to address this second problem, since when they get booted from one platform they migrate to a less moderated one or spin up their own.

[0] Borrowing Peter Singer's framing from here: "Once, every village had an idiot. It took the internet to bring them all together."


I think you may have misinterpreted what I was saying. I'm not arguing against moderation, I was saying I believe the self-reinforcing bubbles of social media on e.g. Facebook and Reddit have been a big driver of polarization and extremism.

Before social media, most people didn't get their info from books, they got it from TV, and you had a couple big channels that essentially led to most people having some sort of consensus on the few versions of reality that were broadcast by the media.

Whether that was a good thing or a bad thing is another discussion, but at least we didn't have the degree of balkanization and polarization we do now.

I was saying that having social media function as is, but doubling down on tools to help people screen out what they don't like (which is what the person I was responding to suggested), would, I think, just accelerate that balkanization. So I don't know that it's a good solution.


Free speech to me is not going to jail for saying you think Hitler is a swell guy or you hate the president. It has nothing to do with protected algorithmic amplification of hate speech which is what a lot of bad actors are clinging to it for.


It's complicated - that's Free Speech as a right, but Free Speech as a virtue has a history in liberal thought that goes deeper than just protection from the government - most notably, Mill in On Liberty. There's an unfortunate but understandable tendency to conflate these two things.


It gets further complicated: if you tell a joke in poor taste, or in haste without considering the future and other implications, you can get retroactively "cancelled".

So today you say something that is acceptable today. But maybe tomorrow, after you turn 18, someone discovers your statement and cancels you by the judgements of that later day.


The solution is for the metaphorical adults in the room to stand up and proclaim "cool story; we don't care" when someone comes knocking at their door with evidence of misdoings of one of their employees. Just claim it's a faked screenshot and your internal review processes do not act on false information.


I’m not really conflating them here. The bad actors argue that having access to algorithmic amplification is a right. As an aside, how do we fit bots into JSM’s framework?


Exactly! Free speech is to protect you from being jailed or executed by the state for publicly held opinions. It has absolutely nothing to do with Twitter, and I believe anyone arguing that it does is arguing in bad faith or out of ignorance of the actual purpose of the free speech clause of the first amendment.


You have this completely backwards. The first amendment is the US' constitutional protection of free speech. Free speech itself is an inalienable right. You would have the right to free speech regardless of whether or not your government protects it (which many don't). Governments do not grant rights.

Free speech on Twitter is a matter of values. It is not a matter of whether or not Twitter is legally liable to protect free speech (they're not) but whether they should protect it because it's something that is worthwhile protecting.

Given the ubiquity of social media and its current massive role in communicating and sharing ideas, what role should the companies behind these services play?


If you have a right and the government isn't protecting it, do you really have a right? Sure you can get all philosophical and say things like every soul has a right to X Y and Z, but that doesn't mean anything in practice outside of the ivory tower if the government you are beholden to has a stance to the contrary.

OTOH if it's only about values and not about the actual legal idea of freedom of speech, then you can argue with that logic that there is also a moral value in protecting groups of individuals from being the subject of vitriol and hate speech on a forum you own. That's the position Twitter et al. have taken in this regard.


> If you have a right and the government isn't protecting it, do you really have a right?

Yes, but only to the extent that you're capable of protecting it yourself. This is why the second amendment exists in the United States. I don't really care to get into whether or not this a valid point of view since that could be its entire own discussion, but that is at least partially the rationale behind protecting people's rights to procure weaponry.

> Sure you can get all philosophical and say things like every soul has a right to X Y and Z, but that doesn't mean anything in practice outside of the ivory tower if the government you are beholden to has a stance to the contrary.

I get what you're saying, but unless the government does some Minority Report type thing where they arrest you before you exercise your rights, most people will still get to exercise them in the real world at least once. A person doesn't lose their right to free speech just because they are dumb or otherwise incapable of communicating their speech, either.

> OTOH if it's only about values and not about the actual legal idea of freedom of speech, then you can argue with that logic that there is also a moral value in protecting groups of individuals from being the subject of vitriol and hate speech on a forum you own. That's the position Twitter et al. have taken in this regard.

This is in fact where I think the most interesting discussion can occur. What values should social media platforms be enforcing? I personally think that censoring speech broadly on the platform is in most cases inappropriate — Twitter and the like can make tools to help people insulate themselves from people they don't wish to see or interact with. Some of these already exist, but they could expand them. They could even create features that allow users to preemptively take action on types of speech they find objectionable (advanced filtering techniques).

I find this preferable because it allows the broader community to maintain discourse (even if some people find it abhorrent) and importantly grants individuals agency over the type of speech they engage with.


This conflict isn't just about people's feelings being hurt, which is what having the ability to enter a bubble where you don't hear anything that would offend you would protect against.

It's bigger than that - what if these ideas become popular and we elect a leader whose primary drive is to go "death con 3 on the jews"?


This is how I look at it as well. The government can't come knocking because I have opinions. It doesn't mean I get to espouse those opinions anywhere I please (hotel lobby, shopping mall, concert, stadium) where it becomes a public disturbance. I'm free to write about whatever my opinions are but I'm not free to force someone to publish them.


So, in other words, you liked free speech until free speech became more prevalent when it became available to the masses via technology?

Part of accepting free speech is being tolerant of speech you may find offensive.


To those who espouse the idea that comments should be filtered for the greater good, I say 'You first.'

There was a time period when the left was for free speech and the right was wanting to constrain it. Maybe it's just a giant pendulum - there is no right/left difference when it comes to free speech - everyone wants to censor / filter the speech of the opposite side.

If things come in cycles, then I expect the right to take over more and more (see the European shift) and then to slowly become in favor of censorship. Maybe then - if we are lucky - the left will remember that censorship is always the enemy even if it currently helps them.


> To those who espouse the idea that comments should be filtered for the greater good, I say 'You first.'

that's part of the reason why I come to HN, for well-moderated (or "censored" if you prefer) discussions on tech


This is truly one of the best moderated places on the internet IMO, and one of the only places after 20 years of posting on message boards that I can go without hurtling toward a flame war every time I log on nowadays. Part of it is, I'm opinionated and I think probably enjoy arguing more than might be reasonable sometimes. Another part though is that the moderation quality is so high here and so low on places like Reddit + Twitter. My 2 pence.

Probably the lack of ability to advertise here the way you are encouraged to do on Twitter/FB/IG/Reddit (because they need that revenue) factors in as well.


One of the things I appreciate about HN is that I can still read the posts that are removed (with showdead enabled).

I don't care much about my freedom to speak. My voice will never be a significant influence on the world anyway. What I care about is my freedom to listen. I value the freedom to review all the evidence and all the arguments then draw my own conclusions.

I don't often want to read dead comments. They are mostly low quality and deserve their status. Nevertheless, showdead is sometimes quite useful and the transparency gives me more faith in the moderation.


> everyone wants to censor / filter the speech of the opposite side.

This is mostly because the left/right spectrum is too nebulous to be genuinely useful at understanding most people's values, which tend to be more nuanced than a one dimensional spectrum allows for. People that want to censor/filter speech are authoritarians. Nothing about authoritarianism uniquely binds it to the left or the right.


The right is still for censorship, but selectively just for the things they want censored. There’s no pendulum, just an explosion in hypocrisy. The left used to rely on goodwill and ethical human behavior to do their “censorship” for them, but we’ve lost that at this point and people don’t care if they’re perceived as evil anymore because they have a large enough mutual admiration club now.


> There was a time period when the left was for free speech and the right was wanting to constrain it.

When was this - and can you give examples of left and right acting the way you described in the past?


There are a lot of examples of this, and the left has had some truly great advocates for free speech. In terms of time period, the Red Scare and McCarthyism marked a time when the left was being heavily censored by the right. The Civil Rights movement as well, with MLK during the 1960s, and then Frank Kameny in the 1970s fighting for gay rights.

Obama, Eleanor Roosevelt, and Aryeh Neier are other brilliant examples of leftist advocates for free speech.


What counts as "free speech" tends to be subjective: was MLK pro-free-speech or against it? That depends on whose speech you're considering. I can give even earlier counter-examples with left/right flipped (e.g. abolitionist literature in the south).

My initial point wasn't that it never happened, only that there were never deliberate, strategic positions on free speech by the left or right - only messy tactical circumstances. Not long before McCarthyism was Japanese internment by a giant of the left: FDR.

Obama famously called someone a "jack-ass" after they exercised their free speech on-stage. He also railed against the Citizens United ruling. Having a binary "for/against free speech" is reductive.


> Obama famously called someone a "jack-ass" after they exercised their free speech on-stage.

I thought that a strange comment. Disapproving of what one says is clearly not the same as condemning free speech.


So why do you think Obama does not have a right to free speech?


https://en.wikipedia.org/wiki/Free_Speech_Movement

Mario's "Operation of the Machine" speech is pretty good.

Also worth noting, often these groups were not quite as egalitarian as they're thought to be. SDS for instance had quite a bit of sexism in its operations: https://en.wikipedia.org/wiki/Students_for_a_Democratic_Soci...

I have no position on this debate, I just thought you'd want a little context.


Fortunately there are other dimensions.


> This presumes that people are swayed by the content of an argument.

Freedom is a good in and of itself. Our rights don't need to serve a larger purpose.

Imagine asking for permission to read a book and being asked, "but what good would you reading this book do for society?" The answer of course is that it doesn't matter -- our civil rights are not transactional -- they do not exist to serve others.


> This presumes that people are swayed by the content of an argument. This is a false assumption, people are mostly swayed by the volume of the argument.

That's only half the story. The other half is tone. The quality of the writing has persuaded me against several beliefs that should have won me over if volume were the only consideration. "These people type like morons, it's probably a belief primarily found amidst the stupid", as it were.


There is a very easy theoretical solution to this: discourage platforms beyond a certain size and incentivize groups organized around topics instead of geography. It's just hard to make these kinds of regulations at this point, now that everyone is hooked on ad money and data mining.


The concern about the algo can clearly be mitigated. Eg here on HN there is no personalized feed concept, and that prevents one from entering a thought bubble.

It’s not completely free speech here, but seems close and mostly pretty good results follow.


> limits on the ability to get speech amplification

Well, you're the only person I've ever seen suggest that social media distribution be limited by author rather than viewpoint. Although I disagree, I'm not quite sure how that could be managed, either.


> perverts absolute free speech into a weapon used by the powerful to deceive and subvert democracies.

Controls on speech get perverted far worse and far faster, every single time. There is no perfect system where we can make everyone infinitely wise.


I agree, mostly. I propose methods to address these shortcomings instead of limiting speech.


I agree with this, but then you don't have free speech, right?

I assume you are referring to something like defamation but controlled by the state


No, I refer to things like attaching counter opinions to opinions of people with high visibility for example. So if the concern is the power of the famous, never display their tweets alone; display them with a few other tweets.

Maybe also issue follow-ups, so that if someone says something and it is later contested, prioritise the contesting posts until they get similar reach. For example, if a politician says he never met with someone and a photo of them together is revealed, make sure that their claim is displayed together with the new photo.

Things like this.
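
A minimal sketch of what that pairing could look like, with made-up thresholds and field names (this just illustrates the idea above, not any platform's actual ranking code):

    from dataclasses import dataclass, field

    @dataclass
    class Post:
        text: str
        views: int
        author_followers: int
        contested_by: list = field(default_factory=list)  # posts disputing this one

    HIGH_VISIBILITY = 100_000  # hypothetical follower cutoff for "the famous"

    def render_bundle(post, max_counters=3):
        # Never display a high-visibility post alone: attach the most viewed
        # posts contesting it, so both sides land in the same card.
        if post.author_followers < HIGH_VISIBILITY or not post.contested_by:
            return [post]
        counters = sorted(post.contested_by, key=lambda p: p.views, reverse=True)
        return [post] + counters[:max_counters]

    def counter_boost(original, counter):
        # Follow-up prioritisation: boost a contesting post in proportion to the
        # reach gap until it approaches the original claim's reach (capped at 10x).
        gap = original.views / max(1, counter.views)
        return min(10.0, max(1.0, gap))

The point of counter_boost is just reach parity: once the contesting post has been seen roughly as often as the claim it disputes, the boost falls back to 1 and it competes normally.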


> I refer to things like attaching counter opinions to opinions of people with high visibility for example

That's what fact-checking is. It's widely heckled.


The problem with fact checking is the presumption of authority over the truth. I don't suggest fact checking, I suggest equal exposure to contesting ideas.

I guess NASA tweets might get paired with tweets claiming that the Earth is not a globe :) That's OK, NASA can respond to these with equal visibility, and if people are not convinced, I guess NASA would need more convincing arguments.


All your "free thinkers" that are browsing these posts for 5 minutes while they take a dump won't be taken in by the mere stamp of authoritativeness on the fact-check posts, right? I mean, obviously all users are able to make good judgments and competently weigh all the facts on every topic. Why are you so worried? What makes a fact check post more authoritative than NASA?

Btw I'm not advocating for active suppression of ideas. I just understand if a particular company chooses to do it on their website. I'd do the same in their place. It's not their job to give everyone a voice.


Is saying something is widely heckled similar to when someone says "we all know.." before making a controversial statement?


I wasn't aware "many prominent people don't like fact-checking" was a statement that needed a citation. In any case, you're free to disagree with that. I don't really care enough to try to prove it to you.


According to a Pew Research poll asking "do fact-checkers tend to favor one political side":

It's split down the middle

https://www.pewresearch.org/fact-tank/2019/06/27/republicans...

As to prominent people: what do you consider prominent, would most people agree with that, and do you have a poll of these people?

Saying most prominent people blah blah blah is debatable on two levels


Your own link says that nearly half of all Americans and 70% of Republicans think fact-checking is biased. That's the attitude I was referencing when I said that fact-checking is "widely heckled". If half of an audience boos you, that's a lot of booing.

I have no idea what you're arguing. Your own link says what I said.

I'm not saying anything about the fact-checking itself. I'm not on Twitter or Facebook. I haven't seen any fact-check posts. I'm sure they try their best to be accurate. I prefer to get my information about the most hoax-prone topics from authoritative sources - primary sources, news agencies, newspapers of record - the more boring, the better.


I like this notion.

However, some sort of "fair & balanced" law would have to enforce this.

Edit: and to respond to sibling comment about fact-checking being heckled...

The mechanism here would have to somehow force a similar amount of views. For example, if a lie gets 1MM views, then the proof of the lie should have to gain 1MM views before the original author can gain leverage of the algorithm again.

Of course the new system will eventually be abused; however, it's a step in the right direction. And when that eventually fails to be recognizable, another set of checks and balances must be layered on top.
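
A rough sketch of how that view-parity rule might be expressed; the data model and the 1:1 ratio are assumptions taken from the comment above, not a description of any real system:

    from dataclasses import dataclass, field

    @dataclass
    class DebunkedClaim:
        claim_views: int        # reach the original lie achieved
        correction_views: int   # reach the attached correction has so far

    @dataclass
    class Author:
        debunked_claims: list = field(default_factory=list)

    def eligible_for_amplification(author):
        # The author only regains algorithmic leverage once every attached
        # correction has been shown at least as many times as the original claim.
        return all(c.correction_views >= c.claim_views
                   for c in author.debunked_claims)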


We had that in broadcasting; it was an FCC rule called the Fairness Doctrine. Reagan dismantled it, and that directly led to the extremist radio empires that fuel a lot of the misinformation online today.


I wonder if anyone believes their own views are too dangerous for broad distribution, and should be limited to protect democracy.


Your last clause makes this beg the question, I think.

A lot of people do believe their own views are dangerous for democracy and would be limited to protect democracy. They just also don't believe in protecting democracy - sometimes explicitly, sometimes with lip service to a "democracy" that's little more than nationalism.


Outside of a few teenagers flirting with monarchism, the number of people who don't support democratic republics is so vanishingly tiny that most references to them are actually straw man arguments.


This is not an argument against absolute free speech; it's an argument against social media and discourse being controlled by algorithms.


exactly. We need to go back to public discourse being carefully controlled by a select few.


Why do we need an absolute free speech platform? What good does it do?

> The only somewhat free place I've seen is 4chan, but it contains so much toxicity that it's barely bearable.

Sounds like a hint to me. There are more free speech sites, e.g. saidit.net. I would wholeheartedly recommend staying away from it: it's a cesspool, like the other reddit wannabes. Voat also comes to mind. Freedom of speech on such sites only serves to say the worst of the worst, and that will predictably include escalating aggression towards other users.


> Why do we need an absolute free speech platform? What good does it do?

This is exactly the right question to ask. I'm convinced that it's not possible to have a constructive "free speech" social media platform. There's always the need for moderation.


I agree, but I'd like to play with what "moderation" means. A great example of when moderation fails / is abused is Reddit, or the big socials like IG. The bots can be overly sensitive / have lots of false positives, and the individuals in charge of moderation are not accountable to anyone (except maybe advertisers, indirectly).

I would like to see a platform where moderation exists, but it's "opt-in" only. Meaning, the mods / bots can tag / categorize user posts, and other users can control the visibility of tagged material. This way everything -- even the most vile, twisted, hateful and disturbing things -- is still permitted a place to exist, but it's effectively shadowbanned by individual choice. Start with some sane defaults, and allow people to peel back the lid on the box of horrors if they want to.

This could work with age restrictions (users below a certain age cannot see certain tags) as well as satisfy advertisers that their ads are shown next to the most inoffensive, oatmeal-bland content (they choose tags next to which their ads are never shown).
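
A small sketch of that opt-in model; the tag names, the default hidden set and the age cutoff are all invented for illustration:

    # Moderators and bots only tag posts; nothing is deleted. Each user (and
    # each advertiser) decides which tags to hide. Tags and defaults below
    # are hypothetical.

    DEFAULT_HIDDEN = {"gore", "hate", "nsfw"}   # sane defaults, opt out if you want
    AGE_RESTRICTED = {"gore", "nsfw"}           # never shown below the cutoff age

    def post_visible(post_tags, user_age, hidden_tags=None):
        hidden = DEFAULT_HIDDEN if hidden_tags is None else hidden_tags
        if user_age < 18 and post_tags & AGE_RESTRICTED:
            return False
        return not (post_tags & hidden)

    def ad_slot_acceptable(post_tags, advertiser_excluded_tags):
        # Advertisers pick tags their ads must never appear next to.
        return not (post_tags & advertiser_excluded_tags)

Peeling back the lid on the box of horrors is then just a matter of passing an empty hidden_tags set.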


I think moderation should be about the words that are being said, not the ideas that are being discussed.

A free speech platform should allow a wide range of topics, but it's not expected to stand for all manner of trolling and bad faith argumentation. I think that conflating the two is tripping a lot of people up in the debate about the topic.


In the end you end up with the same problem. All the "exterminate the jews" types go to the free speech platforms at which point everyone else leaves, even if the people with the ideas that aren't liked are using respectful language. It's not just the bad faith and trolling that make people want to leave the site, it's the base level ideas of the people who have been moderated off other platforms.


> The moral of the story is: if you’re against witch-hunts, and you promise to found your own little utopian community where witch-hunts will never happen, your new society will end up consisting of approximately three principled civil libertarians and seven zillion witches. It will be a terrible place to live even if witch-hunts are genuinely wrong.

https://slatestarcodex.com/2017/05/01/neutral-vs-conservativ...


I think that a platform where people expect the ugly ideas to be debated (in good faith) will have the users that are willing to do that[1].

Not every platform needs to have _all_ the users. I know that it's a bit of an anathema on a discussion board built by venture capitalists to say that the goal of a social platform should not be to maximize the amount of users and engagement, but here we are. I think optimizing your service for "everyone" is a bad strategy in competing with existing social networks, especially coming from an "indie" background. Not that Parler is exactly indie.

[1] I'm saying this as someone that is working towards a discussion platform that targets smallish to medium communities formed around a common interest. In this world if moms wanting to share their latest knitting project are excluded from a service that targets free speech people, that's fine, there can be a knitting community out there also for them. Having these two communities intermingle by using something like ActivityPub is a way to keep "the network effect" but keep them separate enough.


> I think that a platform where people expect the ugly ideas to be debated (in good faith) will have the users that are willing to do that[1].

This does not actually...happen. At least not over the medium and long term. What actually happens, and you can see this in practice, is that decent people are not particularly interested, over long periods of time, in arguing that no, there is no globalist (read: Jewish) conspiracy to take over the world. They lose interest almost immediately, while the frothers intellectually crossbreed and turn from one particular flavor of bigot into all the flavors of bigot.

The problem isn't, as you are characterizing, that a platform must have "all the users". The problem is that this strategy hyperconcentrates relatively anodyne conservatives into literal-not-figurative fascists, and has been doing so for quite a while. The active creation of intellectual cul-de-sacs, of epistemic closures for hateful beliefs, is a major factor in why we're where we are right now.


I disagree with you. I think that the phenomenon that you described (which exists on most social platforms that are advertising themselves as "free speech") is not present everywhere and my impression is that the problem is exactly with the "chase all the users" mentality.

One example I can think of off the top of my head is Scott Alexander's blog, where I saw opinions put forward (most of the time in a respectful manner) that ranged from extremely egalitarian to extremely libertarian. I am entirely sure that some of the people posting there have views that veer into "one flavour of bigot" or another, but because the community as a whole would rise against the most objectionable types of ideas that one could put forward, they never do it. To me that is a healthy community and I hope it can be achieved in other places without needing an "alpha personality" at the center for people to gather around.


If I think jews control the world and I calmly discuss it, present circumstantial evidence, etc would that be acceptable?

Sure, I'm using offensive terms, but that's not as bad as claiming they control the world.

Or if I thought slavery should be brought back but I don't use the n word. Is that really the issue?


I personally would dismiss you as a lunatic and racist in both cases and move on with my day. However I see no reason why you shouldn't be able to make a fool of yourself if you so choose.

Making you feel like a martyr because you are being "censored" is worse in my opinion than allowing you to express your points of view and hopefully be receptive to counter arguments.


What of people that read hypothetical posts like mine and decide to shoot up a synagogue?

https://www.wired.com/story/pittsburgh-synagogue-shooting-ga...


I think you're trivializing the issue quite a bit. But yes, I dislike the paternalism of considering everyone else on the internet stupid and incapable of making informed decisions when facing questionable points of view.

I'm not qualified to speak with any authority about this issue, but my opinion is that people who are willing to shoot other people most likely have other incentives than reading a single dude's online hate ramblings. The problem lies with the fact that they get ostracized and _all_ they are able to read are the hateful things. If you go through the thread above, you'll see that my stance is the complete opposite of that: let's allow people to say the "bad" things and balance them out with other people's "good" things.

This theoretically would ensure that this person is not exposed only to hate and negativity, and will hopefully make a better decision than ending others' lives and their own.

Forcing this unbalanced individual to retreat into a corner of the internet where his opinion on other people goes unchallenged is unquestionably A BAD THING, and I doubt I'll change my mind on this fact any time soon.


"This theoretically would ensure that this person is not exposed to only hate and negativity, and will hopefully make a better decision" ....

"and I doubt I'll change my mind on this fact any time soon"

You have high hopes of people changing their mind when presented with new information, except for yourself apparently


I will not change my mind about this one thing, but I change my mind all the time about other things when I'm being presented information that challenges my point of view. It's a bit childish to veer into ad-hominem and take my words out of context just to try and keep the upper hand. :P


You set yourself up for that by declaring people to be open minded but then saying you couldn't have your mind changed.

More importantly I don't think most people over a certain age change.


They get arrested. The amount of violence that could be attributed to this sort of thing is so minuscule it's barely a rounding error in overall figures. Just like school shootings, it's extremely publicized, but when it comes to the actual numbers it's nothing.

On another note, I'd go as far as to say that prohibiting it will make them even more radical and entrenched in their beliefs. Germany has extremely strict anti-Nazi laws and yet never stopped having neo-Nazis, much to the contrary[0]. The people who are going to go as far as real-life actions will find the Daily Stormer or whatever other website, and now they will feel like martyrs justified by some conspiracy or whatever.

[0]:https://www.theguardian.com/world/2022/apr/06/german-police-...


So how does that work when a hip-hop artist says the n-word vs a neonazi says the n-word?


I disagree. There are limits to what ideas constitute free speech in many modern countries. As an extreme example, an idea that puts forward genocide as an acceptable form of action should never be allowed under "free speech", even if it's said with nice words.

This and other examples are ruled under law in many developed countries.


Then it's not free speech. The whole point is that there are no restrictions. If it's restricted, it is by definition not free.

"Free speech" is a cool buzzword people think they can qualify for (or wish to), without the ramifications of true free speech (hurt feelings, bad ideologies being discussed in a positive light).


There are two definitions of free speech going around the internet discussion boards these days, it seems. One is the legal one that has existed in our country since it was penned in the constitution, which protects you from government oppression for publicly held opinions. That doesn't mean you can say whatever you like and expect no recourse from anyone; you have no protection from being kicked out of a private place or fired by your employer under this law, just that the State will not put you in jail or kill you over these words like other states around the world do.

The other view is that you are allowed to say whatever you like on platforms like Twitter and should not be banned. It has nothing to do with Twitter. Twitter is not part of the State. People making it about Twitter are missing the significance of the first amendment and what society looks like in places without protections on speech and religion from the State.


I'm not advocating for getting the government involved in forcing Twitter to allow all speech. I'm saying people need to stop continuing to use platforms that actively and publicly censor things on a daily basis, as if it's okay and normal now.

Fighting corporate-enacted censorship with government intervention is fighting fire with fire.


Freedom of speech is not a buzzword, it has a pretty good definition in the declaration of human rights and on Wikipedia:

> Freedom of speech and expression, therefore, may not be recognized as being absolute, and common limitations or boundaries to freedom of speech relate to libel, slander, obscenity, pornography, sedition, incitement, fighting words, hate speech, classified information, copyright violation, trade secrets, food labeling, non-disclosure agreements, the right to privacy, dignity, the right to be forgotten, public security, and perjury.

The fact that most people on the internet (who seem to include you) are using it wrong is another thing. Free speech only applies in the relationship between citizens and the state. It has no meaning in the relationship between individuals and the platforms they're using for communication.


You're right. Companies have the right to censor things they don't like on their platforms. That's why people should stop using platforms that censor frequently if they really care about "free speech." Just like how people can't "free loiter" on my personal property if I want them out of it.

I don't care about an arbitrary definition of two strung-together words, whose definitions individually, are absolute. When combined, their definition is just as absolute. The speech must be free. Free is simply defined as free. Not "free, but ..." in which case it is no longer just "free speech."


> I don't care about an arbitrary definition of two strung-together words, whose definitions individually, are absolute. When combined, their definition is just as absolute.

This feels like a deeper debate than I'm capable of having, but all language is a string of strung-together words with meanings. These meanings have reached a high enough degree of consensus to exist in a dictionary or semiotic treatise. I think that clinging to your own meaning of absolute free speech when faced with not an arbitrary definition, but one which was reached through a social and cultural consensus, is naive or willfully contrarian.


> Free speech only applies in the relationship between citizens and the state.

That is a pretty silly definition.

Imagine if a corporate owned mafia was going around murdering everyone who supports increasing taxes.

Surely, you would recognize that this has a chilling effect on speech, and could be said to control people's free speech rights, even though it is not the government doing it.


> an idea that puts forward genocide as an acceptable form of action should never be allowed under "free speech"

See, I have a problem with the word "never". How about "rarely", or at least "once"? A terrible idea should be given an audience once. Let it be quickly refuted, then go back to better conversations. If someone brings it up again, point them back to the earlier discussion. That way it is established why it is a bad idea.


"That way it is established why it is a bad idea." Is that how most arguments on the internet end?


In practice, almost never. Internet arguments seldom result in both sides agreeing on a single outcome. Nobody is convincing anyone else of anything on the internet (most of the time).


Sometimes. Threads are archived. Questions closed but not deleted. New questions/comments disallowed. It meets a middle ground between absolute free speech and absolute moderation.


What I meant was that people don't normally end a discussion, especially a political one, with one side admitting loss and agreeing that the other was right.


Those developed countries have people taking their banned speech underground. It is usually also very illegal to even restate their strongest arguments in order to argue against them. All you have left is the hope that they will never gain stronger support.


That's fine, it helps. Countries like the UK also have defamation laws that are much stronger than in the US.

I read that what really brought down the KKK was a massive amount of lawsuits


My words were "a wide range of topics" not "all the topics".

Personally I can think of a meaningful debate that could be had about "genocide", but I'm pretty sure that people who genuinely hold this opinion are a little beyond what would be considered a "good faith" discussion.


> I'm convinced that it's not possible to have a constructive

I don't know that when the rubber hits the road people are meaningfully trying to make a constructive free speech platform. The nihilism is the point.


A platform that hosted both objectionable speech and regular speech together might be tolerable to read. But most of the regular speech ends up on popular platforms that ban objectionable speech, so the free-speech sites are left with mostly the objectionable stuff, which makes them pretty unpleasant to read.

It makes it hard to start a new platform. People start free-speech platforms with good intentions of having open debate about controversial topics. But they quickly get overrun by hate mongers and trolls, and become too noxious for most people to read. Intentional or not, it's a good strategy by the existing platforms to kick out the nasty people, ensuring that they're first to sign up for every new social network.


The hidden piece of the puzzle here is that objectionable speech pushes regular speech out.

Most users don't want to wade through toxicity to get to signal. If they're discussing a topic of interest, say baking, and someone comes in and starts ranting on how a vast global conspiracy made up of surprisingly-homogeneous ethnicity given its global scale is pushing up the price of yeast to weaken the market for white bread, either the moderators squelch that noise or people who want to talk about baking go somewhere else to do it.

Given their own freedom, when given a choice, users tend to select moderated channels over unmoderated ones. We've been doing the Internet long enough to know this to be true.


I don’t think bad speech pushes out good speech directly. Rather, it pushes out the audience, and the good speakers follow.

The end result is the same, but it’s important to understand exactly where the mechanism is failing if you want to fix it.


> Given their own freedom, when given a choice, users tend to select moderated channels over unmoderated ones. We've been doing the Internet long enough to know this to be true.

This is true, but unfortunately the same mistakes keep being made because people don't pay attention to the history of the internet or didn't grow up during that era. We've known that completely unfettered discussion leads to self-destruction since the Usenet era. But the lessons go unheeded or ignored, so we get people who either stay ignorant or learn the hard way.


4chan has hosted anything and has done so for longer than other social media sites.

Maybe toxic people just congregate in places where their speech is accepted, therefore making the rest of the site toxic as well.

Maybe it’s not “hate mongers and trolls” that overrun sites and that the concept of free speech and being able to say anything just naturally brings out the worst people and the worst in people.


When only the rejects use the sites of course it's going to be full of bad content. That doesn't mean there isn't value in free speech being better valued on mainstream platforms.


> “When only the rejects use the sites of course it's going to be full of bad content.”

The problem with this theory is that 4chan is older than both Twitter and Facebook.

If unmoderated speech created an inherently better platform, surely 4chan would have captured the market a long time ago and cut off commercial alternatives like Craigslist did.


Didn't 4chan start off largely as castaways from Something Awful? Your point is valid though. 4chan had an enormous amount of time to become the shining star of how great an absolute[1] free speech site could be, but still manages to be a cesspool. This should be a neon hint, but people keep thinking that they're going to invent the one free speech site that doesn't end up toxic.

1: Also as others point out, even 4chan moderates, however lightly.


And 4chan got much worse over time; there's no reason it had to be the rejects at all. The toxicity was entirely self-directed.


…or capturing the market would have flooded it in inane speech, rendering it no better (or different) than anywhere else.


When given the choice between platforms, why do you think the majority continues to congregate on the more restrictive ones?


Why are we here instead of 4chan? Why does everyone that uses email use a spam filter?

Direct, unfiltered exposure to the firehose is at best banal, and at worst disgusting and self-destructive. It's an _awful_ job that ~no one would choose to do for themselves.


I think the answer here is complicated, but a good portion of it is closely related to 'my friends are here.'


Yeah, it's 'my friends are here' and 'the content I want to read/interact with is here'.

It's the same reason people haven't mass switched to Mastodon or other Fediverse services; because the userbase is so much smaller than the likes of Twitter that there's a good chance the people and content they care about isn't available there. Or why so many competitors to popular services fail in general, regardless of their stance on free speech. The network effect is strong, and sometimes even billions of dollars and tons of marketing can't overcome that (see Google+ for example).

Would people prefer a free speech orientated alternative? Hard to say, for the same reason as whether they'd prefer a decentralised or federated one; it's the content and users that bring people to a site or service, and the competitors to the popular ones are so much smaller and less active it isn't much of a comparison.


I think the problem lies in the impunity, not in the free speech itself. IMHO people should be allowed to say anything, but they must accumulate a reputation for saying it.

If someone is a racist bigot, they shouldn't be physically restrained (deleting posts is like physically covering someone's mouth) from being a bigot, but they should definitely be known for it. Then it's up to the community to decide how to interact with those people. That's how we do it in real life, and it works pretty well.

Another thing is the amplification: people pretending to be multiple people. This is also an issue, giving a wrong impression about the state of society, and must be solved.

Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". The implementation of it wouldn't need a ministry of truth but an algo to track theme developments.

IMHO if Musk manages to solve these few problems, which I think he can, a free speech social media is possible.
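
A sketch of the spread-management / re-exposure idea above, with an invented data model (this is just the shape of such an algorithm, not how any existing platform works):

    from collections import defaultdict

    # Track who saw which post; when a development is attached to a post,
    # queue a follow-up notice for exactly the users who saw the original.
    seen_by = defaultdict(set)      # post_id -> user_ids who were shown it
    followups = defaultdict(list)   # user_id -> pending follow-up notices

    def record_impression(post_id, user_id):
        seen_by[post_id].add(user_id)

    def attach_development(post_id, notice):
        # e.g. notice = "Remember this tweet? There are some doubts about it."
        for user_id in seen_by[post_id]:
            followups[user_id].append((post_id, notice))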


Please, deleting a tweet is hardly being 'physically restrained'

> Then it's up to the community to decide how to interact with those people.

Twitter is a private company, and it chooses to run its service how it wants. The government avoids actually physically restraining racist bigots, and lets the community decide how to deal and interact with those people. Some may choose to harbor them (Parler, 4chan, etc), and others (like Twitter) may opt not to host them.

It's not a huge social injustice if you're not allowed to tweet. Feel free to go to one of the millions of other websites, or start your own (it's easier to do this than ever!) and see who's interested in what you have to say.

> Maybe everyone exposed to something should be re-exposed to the theme once there's a new development.

You're just reinventing content moderation!


I don't accept that content deletion is the way to go. When offensive content is deleted we lose the ability to judge it for ourselves. The content must remain, but be strictly attached to a persona so the persona can be "judged" rightfully. In real life, when we deal with these people we want to know what they did. That gives fidelity, unlike "the person said something that violates rule 4 section 3". We should stop pretending that we are not humans and embrace the human ways of dealing with human problems. There's nothing human in undoing speech.

And no, attaching follow up to organic content is not moderation.


I don’t see any value in spending my time judging content saying that trans people are degenerates or that black people are an inferior race. I’ve already judged those ideas in my life and don’t need to see them anymore.


Well you can judge people that say those things as people who don't deserve your respect and attention. Then don't hang around places that interact with those people, that's how we do it in real life.


> Then don't hang around places that interact with those people

We've just come full circle to why Twitter chooses to moderate. They don't want to keep up content that drives people away.


In an online world with no moderation it is impossible to not hang around places with these people. They can just show up unannounced to spew hate speech wherever they want.


In real life, unless you’re recorded, there isn’t a record of what you say. Moreover, people who do hear first hand what you say will recall different aspects and also forget detail over time.

This allows people to evolve and to not be beholden to something they said/thought a decade ago and no longer think.


How do you feel about HN's approach - flagged or heavily downvoted comments are invisible if you are not logged in or if you have not changed "showdead" from the default unchecked state (at which point they're rendered in a hard-to-read colour)?


I'm not the person you're responding to, but I myself prefer giving users moderation tools that affect only their own view of the content. Users trying to save other users from posts they personally disagree with can, in my opinion, lead to echo chambers just by itself. Let me configure my account so that I can block or mute specific users, highlight keywords I add to a list, or see tags that other users apply to posts. That way others can express their opinions in more ways than just commenting, and I can use that data to decide whether I want to read the existing comments or not.

I do think HN is one of the better moderation systems since this is one of the saner places on the internet and you can still configure things so you can at least see all the content, though you can't interact with it all. I would just prefer it if I was in charge of saving myself from bad opinions or whatever motivates people to down vote posts into oblivion.
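
As a sketch, the kind of user-side filter described above could be as simple as the following (all names are hypothetical; the point is that it lives in my own settings and only affects my view):

    from dataclasses import dataclass, field

    @dataclass
    class MyFilters:
        muted_users: set = field(default_factory=set)
        muted_keywords: set = field(default_factory=set)

    def i_want_to_see(author, text, filters):
        # Nothing is deleted for anyone else; this only hides posts from me.
        if author in filters.muted_users:
            return False
        lowered = text.lower()
        return not any(word in lowered for word in filters.muted_keywords)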


Nobody wants to reinvent moderation and have a list of keywords and such that they have to maintain.


I don't like it when it's due to a low score; that is an oppression of unpopular opinions.

I like it when it's about flagged posts. I have the option enabled to show these posts and I would vouch if there's something worthy in it. So spam and other BS is "removed" but I still can take a look at it and see it for myself.

Overall, I think HN is one of the best moderated online places.


So this is unusual to me, because flagging feels more open to abuse than the comment score. Indeed this very thing happened to me recently:

- user A says vaguely racist thing

- user B calls person out for racist thing

- user A cannot downvote a reply, so they flag it instead - making it disappear

So both can silence someone, but in one case many people need to disagree with someone and in the other you just need one person (or one person with an alt account if you want to go and revert a "vouch"). So if anything flagging is more prone to abuse than downvoting. I try to read greytext comments when I can and vouch whenever it looks unjust (and do something similar for downvoted posts) but from the looks of things not many people do.

edit: might as well include my example - https://news.ycombinator.com/item?id=33164001

So I'd understand if I was downvoted and called out for my confrontational response to racism. Because at that point I'd reply that actually treating this kind of casual xenophobic comment as unserious and mocking the person is the most effective counter to this kind of behaviour. Getting bogged down in debating the merits or worth of any individual person or where they might "belong" is exactly what this kind of person wants to do.


That's where you judge the moderation. The moderation quality defines the community quality.

The good thing about HN is that the moderators are reachable and they respond intelligently. Unlike with AI moderation, you can send an e-mail about it and dang will respond to you and explain why something happened, and you can discuss it.

I had my account restricted multiple times and restored once we got on the same page (I don't agree with everything, but once I see their point, I can work with it). I have had wrongfully flagged comments unflagged by sending them an e-mail too.

It's not perfect, but it's pretty good and miles ahead of anything else online.


So in my case the comment didn't desperately need unflagging - someone could wave the comment guidelines in my face and I'd probably concede that such an open confrontation broke at least one. But yeah I guess you can overturn a flagging more easily than being downvoted.


The real key is that there's a moderator, and the community is small enough that he can check things manually.

Once it gets too big for that, you're doomed to destruction eventually.

My preferred solution would be to break up the communities once they're too big, instead of trying to make a massive world-wide community like Twitter does. Reddit somewhat has this, but there is still a site-wide issue.


>Once it gets too big for that, you're doomed to destruction eventually.

>My preferred solution would be to break up the communities once they're too big, instead of trying to make a massive world-wide community like Twitter does.

I agree with this. In real-life situations you can see it too: the larger the crowd, the stupider its total behaviour becomes. Large crowds are good for certain things, though mostly primal stuff like singing and chanting.


> Maybe everyone exposed to something should be re-exposed to the theme once there's a new development.

This doesn't work. Show people two articles, one that is false and one that is true, and most people will say the one that aligns with their priors is true. We need to either teach people to recognize fake news, censor fake news, or accept that basically everyone will believe false propaganda. There are no other options. Once someone has been shown an article they agree with telling them the article was false just leads them to think you're on "the other side".


If you took perfectly rational people and ran the same test, you'd get the same result. Evidence that supports a position that you already have evidence for is more likely to be true. If someone showed me very compelling evidence that the world was flat, even if I was unable to find any issue with it, I still would believe that it is false. If a single counterpoint could change your belief, you never had any business claiming it as a belief in the first place.

Do you have cause to believe that repeated exposure to every side of every story won't lead the average person towards truth?


And who is going to decide which "fake news" to censor, and how can you assume they won't fall into the exact same trap of wanting to believe what they already agree with?

We're hot off the heels of the Hunter Biden story; surely that should be a wake-up call regarding how "misinformation experts" cut both ways.


No clue. All I know is that if we don't censor fake news people will believe it, no matter how much evidence to the contrary they are shown. Maybe we just have to accept that.


> Another thing is the amplification: people pretending to be multiple people.

For a free speech absolutist curtailing this could also be seen as removing free speech.

> Lastly, we need some kind of spread management. We have the problem of BS getting huge traction and the correction getting no traction. Maybe everyone exposed to something should be re-exposed to the theme once there's a new development. For example, when people share someone's photo as a suspect and it turns out that the person in the photo is not the suspect, the platform can say "remember this Tweet? Yeah, there are some doubts about it. Just letting you know". The implementation of it wouldn't need a ministry of truth but a algo to track theme developments.

Still, this wouldn't solve the spread of BS, especially targeted BS: it is tailored to invoke and reinforce inherent biases, and on average someone exposed to it becomes less inclined to read or critically judge any rebuttal. Bullshit spreads far more easily than well-researched rebuttals, just by the nature of bullshit. It's a game where truth is bound to lose: no matter how many "algorithms" you implement to push developments of a story to the same audience, that audience's engagement with the rebuttals will vary with their biases. And that's before you account for the drive and energy needed to actually follow up on further developments; in the fast-paced world of social media, people choose selectively where to invest their energy. Someone who fell for bullshit won't want that effort undone by rebuttals, and will avoid activities they perceive as a waste of energy. Once you've formed an opinion, it's much harder to un-form it.

I'm firmly in the camp that absolute free speech on social media is a fool's errand, at least in 2022. There is no upside that offsets the massive downsides we already see and experience, even without absolute free speech in place.

The detachment on social media between the written words and the real humans behind them causes a not-insignificant amount of grief that wouldn't happen in an in-person interaction. It seems we humans easily lose our humanity outside a real-world social environment: the vileness is exaggerated while empathy is easily pushed aside.


I agree that we can't have a perfect solution, but let's not lose a good solution in the pursuit of a perfect one. I think there can be a good solution by implementing some real-world social dynamics in the virtual one.

Jerks and BS artists are nothing new, but in the real world we have some tools to deal with them. IMHO, changing how some things work can create an atmosphere of healthy interactions.


Agree. The big one you missed is identity. Most hate is anonymous. Being able to filter by tags like "known racist", or seeing someone's history of sharing misinformation, is useful, but most people would self-censor if their identity was known, and other users would filter out those who won't identify.

What I wonder is what Musk will do if he finds out the scales are artificially weighted towards conservative content - if, say, conservative content is being boosted by bots and algorithms. Facebook was much more liberal before thumbs were put on the scale. I don't remember exactly when, but I think it was Mother Jones that saw huge traffic changes after an algorithm change about a decade ago?

https://www.theverge.com/2020/10/17/21520634/facebook-report...

Like what if the natural state of humanity is much more liberal than the American media and social media allow for? Will Musk allow that or will he see anything that doesn’t align with his views as error or manipulation?

What if a truly free and transparent self-moderating platform naturally promotes leftism more than a moderated but manipulated feed does?


A study showed that people are more aggressive online when using their real name

> Results show that in the context of online firestorms, non-anonymous individuals are more aggressive compared to anonymous individuals. This effect is reinforced if selective incentives are present and if aggressors are intrinsically motivated.

https://journals.plos.org/plosone/article?id=10.1371/journal...


Weird, but I still think Elon's idea of requiring a confirmed identity for a checkmark is solid. If anonymous users are nicer than checkmarked users, I guess the filter will work in reverse? The elimination of bots will be nice, if they can do it.


Requiring validated identity is anti free speech.

So many words have been spilled about what a terrible idea it is to require confirmed identities online.

Recently, see https://twitter.com/mmasnick/status/1576274615231401984 or more casually https://www.garbageday.email/p/oh-cool-were-talking-about-an...

> Generally, if your solution is virtually indistinguishable from one of the systems the Chinese government is using to keep people in line, your solution is bad.


That’s an opinion. I disagree with it. It’s a private corporation not the government.

You either have people incentivized to self-identify with a checkmark, or what? The alternative is to build an AI that identifies you in order to remove bots? I don't even think that's possible without it auto-removing everyone who uses anonymizing tools like Tor.


How do you know that requiring identity is anti free speech? Not everyone online is an Iranian political dissident. Sure, some people claim that you can't have free speech when your identity is known, but I don't see any solid reasoning behind it.

Mike Masnick in his tweets repeats some talking points but there's no cohesive argument.


And AFAIK an anonymous political dissident wouldn’t want a blue checkmark?

Furthermore, there can be layers of anonymity: anonymous to the public but not to Twitter. That's dangerous, given that Twitter cannot protect your identity from a state actor accessing its internal systems. So again, why would you want a checkmark as a dissident?


Again, not all speech revolves around political dissent.


Some does though.

So the fact that this applies to some people means that it is an issue for those people.


We can have special arrangements for special circumstances.


Requiring ID verification is adding limitations on who you permit to speak. It is inherently anti 'free speech'. I think it's fine if that's the sort of website you want to build (twitter at the moment is not a free speech maximalist), but don't pretend that doing this doesn't limit speech.


> Requiring ID verification is adding limitations on who you permit to speak

Do you mean in countries where not everyone has government ID? That's not an issue; the government doesn't have to be the authority on ID. Besides, governments can create fake IDs for covert operations anyway. I'm not suggesting that everyone should connect to the internet with a government-issued ID card.


How do you verify someone's IRL identity without a government issued ID card in a scalable way?

I don't mean some idea that could work at some arbitrary point in the future (decentralized whatever...). If a social media platform were to do this, right now, how would they do it without verifying a government issued ID?


Identity doesn't come into existence when you register with a government; it's something you build over time as you interact with the world around you.

Nicknames are an identity, and it's common these days to have nicknamed accounts all over the internet. The problem is that one person can have many of them, and behaviour in one place doesn't transfer to other places.

So maybe we can have across-the-internet identities. You are jasonshaev here, but who are you on Twitter? On Reddit? Elsewhere? Once you are known everywhere as the same person, you have an identity you would want to protect. You can't troll one place when you're bored and then be known as a nice person somewhere else. I think that's good enough as identity. The implementation could be built around crypto, single sign-on, face recognition, etc.
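As a minimal sketch of the crypto angle (purely illustrative; the names and the use of the third-party Python `cryptography` package are my assumptions, not an existing product): one long-lived keypair acts as the cross-site identity, and each per-site handle is just a signed claim anyone can verify against the public key.

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # Hypothetical sketch: one long-lived keypair = one cross-site identity.
    identity_key = Ed25519PrivateKey.generate()
    public_identity = identity_key.public_key()

    def claim(site: str, handle: str) -> bytes:
        """Sign a statement tying a per-site handle to this identity."""
        return identity_key.sign(f"{site}:{handle}".encode())

    def same_person(site: str, handle: str, signature: bytes) -> bool:
        """Any site or user can check the claim against the published public key."""
        try:
            public_identity.verify(signature, f"{site}:{handle}".encode())
            return True
        except InvalidSignature:
            return False

    proof = claim("twitter", "jasonshaev")
    assert same_person("twitter", "jasonshaev", proof)

Reputation earned under one handle could then attach to the public key rather than to any single site's account.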


The thread started with "real name." The only way to verify that is government identity.

If you want to verify some other, "online" identity, that's fine, but I don't see how that would meaningfully affect anyone's behavior. To be clear, I don't think verifying someone's real name will meaningfully improve online behavior either -- plenty of other threads explain why. In which case, what's the point of either?


I struggle with how else to phrase this - Adding restrictions inherently restricts people.


You never know where the prevailing winds of online sentiment will turn next. Having your every post tagged with your identity can lead to real-life problems in the future, even if it was something edgy you said as a teenager or something you used to believe but don't any longer.


> You never know where the prevailing winds of online sentiment will turn next. Having your every post tagged with your identity can lead to real-life problems in the future, even if it was something edgy you said as a teenager or something you used to believe but don't any longer.

So maybe, for every single thought one has, one ought not fly around the world and post it on a flyer on every street corner and light post. Which is basically what posting on Twitter is.

But then I think a ton of stuff people casually do online is batshit crazy when you put it in real-world terms. Of course you wouldn't do the above. You wouldn't even do it if you had a magic button that could make it happen for you without taking time & money to go do it in person. "Post my random toilet thought on hundreds of millions of surfaces all over the world? No, god, why would I do that?"

Would you give a teenager access to such a magic button? Of course not. That would be entirely insane. Even if using the button would not, per se, get them in trouble, you'd destroy that thing or put it in a safe. Handing it over to them to do with as they please wouldn't even be something you'd consider doing.

But we live in a world where ~every developed-world kid has a button like that by age 12, and sometimes much earlier. WT actual F. Of course it's causing tons of problems. Most adults couldn't be trusted to make good choices with such a tool (clearly).


You already require confirmed identity to receive a Twitter blue checkmark. So that wouldn't be a new change.


Wouldn't self-censorship solve the problem just as well as deleting content?

See, because we don't say everything that comes to mind, we are able to interact civilly with people who may hold all kinds of opinions. In real life, I'm sometimes shocked to discover that someone is a total bigot.

However, once civility is established we can discuss those ideas too: instead of being toxic, these people can express their ideas civilly and have them discussed. Maybe they have a point sometimes? If they do, it can be duly noted; if they don't, they'll be exposed to the counter-arguments. Also, when ideas are expressed civilly, people don't immediately label others as "bigots" or "racists", and they accept the nuances. In fact, some prominent right-wing people do exactly that, people like Jordan Peterson. Because the guy is civil he is effective, and it's up to the rest to contradict his claims in an equally civil manner.

So yes, it is alright to have some self-restraint and think before you speak. It's definitely much better than suppressing speech.

edit: the comment I responded to was a bit different; I guess the OP added more thoughts.


> Why do we need an absolute free speech platform? What good does it do?

I think the better question is: what harm does not having a free speech platform do? I think the answer is fairly evident when you look at how the ability to control speech has been used throughout history. The justifications, I would also suggest, have always been the same as the ones being advanced now. People act as if social media revealed that the masses will say terrible things when allowed, but in fact that was the common opinion for most of history.


We have a free speech platform though. Practically everyone has decided not to use it. I think the echo chamber that self selecting which moderators you want has created huge problems, but we can't force people to use 4chan when they don't want to.


I think you misunderstand my position. I'm not saying twitter should be unmoderated, I'm responding to the question as to why free speech is important. I would say that the internet itself is a free speech platform, and that's a very good thing. I'm perfectly content to let people converse under whatever rules they choose as long as there is choice.

I will say in regard to twitter though that excessive policing of speech creates a segregation of audiences and consequently increases echo chambers, which is probably not what those speech police were trying to accomplish. I think it would be healthier for society if it were a little more tolerant, but that's not really related to my position on free speech.


> Why do we need an absolute free speech platform?

To ensure that ideas that people want to be censored or deplatformed can be evaluated by others who want to see what they are.

If people are not able to read opposing viewpoints, it makes them less able to understand them, and why they are wrong.


But Musk doesn't want a platform for "free speech" - he wants a platform where guys like him can do or say what they like without repercussions, but where he can crack down on anyone he doesn't like.

Like it or not, Twitter is about as good a compromise as you're going to get. The "free speech" places like Truth Social and Gab will happily boot you off if they don't like you. Twitter has a TOS it enforces quite forgivingly: for the most part issuing suspensions for violations and letting you delete TOS-breaking tweets rather than banning you. The line for Twitter seems to be actual real-world harm that can be directly attributed to your actions on the platform. So if you're getting banned from Twitter, you've fucked up big time.


As much as I would like to believe otherwise, I think I agree. Musk is all about Musk. Everything is a tool to accommodate that process.

I still think he is the best thing that came out of Paypal, but arguably that is not a tall order.


Assuming the user's comment meets the legal definition of 1A speech and is not porn: if you can show me a single instance of a comment that was removed or censored in any way by anyone with access at Gab, I will personally send you USD $50 in crypto at any address you specify in the comment, after I corroborate your evidence with Gab's management.

I've been on Gab since its inception; the only comments/users that get booted are those who engage in illegal speech. Illegal speech != distasteful/hateful/politically-charged/racist/sexist/divisive/etc.

https://www.uscourts.gov/about-federal-courts/educational-re...

Is hate speech right? In most cases probably not, unless you're saying something like "f..k all pedophiles who rape children", which is righteously motivated. Is it legal? Yes. This is a very important distinction and what gives the USA its unique character among all other nations, which do not have these freedoms codified in their constitutions. In fact, in many countries you'd be quietly sacked for saying the "wrong" thing, where "wrong" is defined by whoever happens to be in power (Russia and China come to mind).


User 3 in this article was banned for "spam" which as far as I am aware is not illegal. Gizmodo received a response from Gab saying "spam is not free speech."

https://gizmodo.com/even-the-freest-free-speech-site-still-b...

Also, HN user encryptluks2 says he was banned "for making a post asking how are all the domestic terrorists Trump supporters doing after the capitol riot." They may be able to give you the info to verify.

https://news.ycombinator.com/item?id=26339259

Also, any form of sexual content is banned on gab which is also generally protected free speech. I'm not aware of anyone having been banned for it but I'm sure they are out there, and if not it is easy enough to test.


I can't message that user directly, nor can I reply to his comment to ask for more info. That kind of speech definitely falls within legal 1A speech. It also seemed like he was baiting, which is, as you pointed out, not illegal speech.

I'd also like you to consider another possibility: people on the internet lie and distort the truth. A lot. E.g. he may have been banned, but not for what he says here on HN. Or he may not have been banned at all, but knows he'll get upvotes on HN if he bashes Gab, which was initially backed by Y Combinator... until it wasn't. I'll let you figure out why they stopped supporting it.


These places with less free speech aren't even as far away as Russia or China. Hate speech is illegal in Australia for instance.


I didn't peg Australia as a repressive country/government. If it's really the case that laws are codified against "hate speech", it's pretty much the beginning of the end for them, IMHO. Free expression should be as close to absolute as possible, everywhere.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

Correlation doesn't imply causation, but it does wiggle its eyebrows meaningfully in causation's direction.


It tells me there are places of free speech on the internet, but what is really wanted here is forced listening.

Free speech on small fringe sites somehow doesn't count because it's not forced on people that don't want to see it. It's pretty clearly not a free speech issue at this point.


You've hit the nail on the head. All these "free speech absolutists" actually want "gratis reach" not "free speech".


Exactly. And they don't understand that free speech does not come with freedom from consequences.

There are plenty of places with free speech. You can walk outside and start saying some pretty insane stuff right now and people may hate you (consequence) but you are unlikely to have any type of legal consequence.

The social media one gets me the most because even if you do get rid of all moderation it is no secret that there is some algo out there amplifying some voices and not others. And in a way that is just censorship with extra steps.


That's a good observation!

I get unreasonably irked when some subreddit won't let me post a comment / removes a comment. Like, "who do you think you are to limit my ability to express myself here?!" Deprived of context, my contribution is meaningless -- so, how does that interact with the freedom of speech, really?

Say I'm in a crowded square where people are arguing about squids, and I have a squid-related revelation I'd like to share, but the self-appointed Guardians of the Square have gagged me. Is this an infringement on free speech? I'm free to leave the square and speak -- but what I have to say is relevant within the confines of the square, not elsewhere.


It'd be weird and very abusable if free speech ever implied the right to a venue.

If they couldn't stop people talking at their venue what's to stop someone completely sabotaging their agenda?


I believe that there is a time, a place, and a proper amount of just about everything, including toxicity. People who wish to interact with 4Chan and its culture need to understand what it really is. The anonymity affords unfiltered reaction and you should never expect your posts to be treated with the kind of social norms that non-anonymous and pseudo-anonymous platforms provide. While the default experience is to have your posts largely ignored, if you actually want honest and unvarnished opinions on your idea then 4Chan is the place to solicit it. As long as people go in with the understanding that nothing posted there should ever be taken seriously and that it functions as counter-cultural catharsis, the perceived toxicity becomes a feature, not a bug.


> if you actually want honest and unvarnished opinions on your idea then 4Chan is the place to solicit it.

I actually doubt the kind of trolling that happens on 4chan can be described as honest. Unvarnished maybe.


What's being censored currently besides hate speech and child porn?

There are lots of things demonetized and not getting recommended.

But what else is censored?


[flagged]


4chan does not have the concept of “Internet points”. Are you thinking Reddit?


No. I'm talking about 4chan.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

Here, you've solved it. "The free market", both users and advertisers, demands content moderation. If you want to attract users, you need a website that isn't a cesspool of 'toxicity'. If you don't want to drive away those who actually pay for your website (advertisers), you'll need to moderate.

Reddit has proved this out - they started out trying to say they're hands off, and they'll only remove illegal content (ignoring how troublesome that is to define for a global website), and they've slowly learned over the years that they cannot grow their website with those policies.

You could say you don't want to grow your platform and would rather stay a small niche, which is totally fine. That's what Gab, Parler and 4chan are. We already have them!


They could grow the website just fine - it was growing the advertising (read: profits) that was the problem.

And you'll know the death-knell for reddit is here when they crack down on the porn.


No, it wasn't just about advertising. Some types of content (piracy, child porn, etc) would get them in actual legal trouble


That falls under illegal content in the US, which they always removed. Maybe not piracy - there are plenty of piracy subreddits, last I checked. What made them start changing things, IIRC, was the backlash over the iCloud(?) celebrity nude leaks.


> Musk and everyone else is right that we do need a platform with free speech. The only problem is that all those free speech advocates are actually NIMBY's when it comes to free speech.

Agreed. Truth Social/Parler as "free speech" spaces is 100% laughable.

> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

AKA every totally-free-no-holds-barred speech site. There's a reason it's a stupid goal, held by people who are either too naive or using it as a dog whistle.

> Yet again, I like that Musk and Kanye kind of people claim that they want free speech because at least we can hold them responsible when they don't deliver it.

Yes, because we have such a good track record of holding liars accountable...

> This is in contrast with the pure fascist where they cannot be held responsible for anything because they don't claim virtue in first place.

I can't even with this line. We have plenty of fascists running around claiming mountains of virtue and lying through their teeth. Their base/audience continues to blindly follow them and holding any of them accountable (especially by their base) is a pipe dream.

> It's a bit like companies doing greenwashing, which can be exposed when they don't deliver on their claimed virtues versus companies who don't even claim such virtues and instead pretend that it doesn't matter.

Again, this just isn't happening at scale.

> Those who claim virtue are better even if they ultimately fail.

False.


Absolute free speech and anonymity is a toxic combination.

Free speech in the way it was envisaged in the constitution presumes there is a feedback loop back to the emitter of the speech. Anonymity breaks that feedback loop. Anyone who tells you that free speech without consequences has ever existed pretty much anywhere is lying to themselves and to you.

If you want anonymity you need some measure of bounds on speech in those places.


Counterpoint to your anonymity argument: we've learned in the past few years that people are very much OK with being publicly identified with hateful ideologies/ideas, e.g. MAGA supporters. A lot of them post and participate online under their real identities, much of it on Facebook.

Anonymity is only a deterrent when you are the odd one out. When the President of the US is the one spouting the insanity you don't have to hide anymore.


> Anonymity is only a deterrent when you are the odd one out. When the President of the US is the one spouting the insanity you don't have to hide anymore.

The problem with anonymous online echo chambers is that they lull the community into thinking there are more of them in the real world than there really are, which emboldens people to take their online craziness into the real world. This goes for everything from politics to "the raid on Area 51".


That sounds like a cyber safety issue more so than a problem with anonymity. Remember all those lessons about not sharing your real address or name online? Add another caveat that the two idiots fighting may, in fact, be the same person putting on a show. In fact, the crowd cheering them on is also half-composed of the same guy (the other half is organic popcorn-watching).


People also act differently when they're free from any risks of the behavior - and this holds true with money, social interactions, whatever.


Mass shooter manifestoes have cited both 4chan posts and named newspaper, TV and radio personalities. People are quite capable of being terrible under their own names.


Exactly. Personally I’d rather see more of internet communities/products regulating access to anonymity. This opinion is intensely unpopular around here, but IMO it addresses the root cause. The internet made it instantly easy and cheap to have multiple identities. Don’t get me wrong, we still need that tool in a modern human’s toolset, but it shouldn’t be cheap and easy to generate low value spam. Imagine if Twitter had a community reputation system and Twitter itself never removed any Tweets but just let the community downvote them into nonexistence, like we do here…


Community reputation works (mostly) the same as a lack of anonymity: it means your actions are tied to your account (in re Twitter) and, to some extent, to your pseudonym.

I'm a furry; furry is a community built around an isolation between our IRL identity and our online one. But the community is tight-knit enough that your reputation will follow you around - the identity you created for yourself, yes, but still your identity - and if you go too far out of bounds, you get quietly (or loudly) excluded from the mainstream of the community. It largely functions the same way as tying your real name to every online identity.

Now take something like Twitter: you start with a karma of, say, 75. With anything less than 100 karma your tweets won't show up in searches; less than 50 and you start disappearing from timelines, even for followers; less than 30 and you disappear from lists. Effectively this creates an automatic shadow-banning system.

As a saving grace, you'd earn a quarter point of karma just by not having any negative interactions on the site, and you could also earn positive karma from upvotes on content.

You could put some other bounds in there too, like limiting how much positive or negative karma a single post can earn, to prevent it from skewing the numbers too much (it should be based on a weighted average of interactions, not on one tweet that goes viral while the rest is low-effort shitposting).

Ideally you'd have a cross-site 'identity' service that carries a weighted karma score from all of the places you interact, and lets people see those links. You're still abstracted from your real identity, and you're always welcome to start over - abandon your account and start from zero - but there is a persistent history of your interactions.
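Just to make the mechanics concrete, here is a toy Python sketch of that shadow-banning ladder. The 100/50/30 thresholds and the 75 starting point come straight from the comment above; the per-post cap value is made up, and the quarter point is interpreted as a per-day trickle, since neither is actually specified.

    SEARCH_MIN, TIMELINE_MIN, LIST_MIN = 100, 50, 30
    DAILY_TRICKLE = 0.25   # quarter point for a day with no negative interactions (assumed period)
    PER_POST_CAP = 5.0     # arbitrary cap on how much any single post can move karma

    def visibility(karma: float) -> set:
        """Where a user's posts are allowed to surface at a given karma level."""
        places = set()
        if karma >= LIST_MIN:
            places.add("lists")
        if karma >= TIMELINE_MIN:
            places.add("timeline")
        if karma >= SEARCH_MIN:
            places.add("search")
        return places

    def update_karma(karma: float, post_scores: list, clean_day: bool) -> float:
        """Average of capped per-post scores, plus the daily good-behaviour trickle."""
        capped = [max(-PER_POST_CAP, min(PER_POST_CAP, s)) for s in post_scores]
        delta = sum(capped) / len(capped) if capped else 0.0
        return karma + delta + (DAILY_TRICKLE if clean_day else 0.0)

    # A new account at 75 karma shows up in timelines and lists, but not in search.
    assert visibility(75) == {"lists", "timeline"}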


"Social media made y'all way too comfortable with disrespecting people and not getting punched in the face for it." - contemporary philosopher


I mean, we haven't had a society that was free of violence either, but that doesn't invalidate the cause of nonviolence.


> Absolute free speech and anonymity is a toxic combination.

Yes, because in the real world, if you say something hateful enough to the wrong person, you'll get your head knocked off.

So people have some sort of filter.

When you take that away, the trolls with no lives come out just to aggravate people, because misery demands company.


Free speech was not "envisaged in the constitution." Free speech is an unalienable right of men, which our Creator endowed us with.


We don't need a free speech platform. People need a choice as to how the content they see is moderated. That's what forums used to do, until we collectively decided to just put all the forums on Facebook or Twitter rather than having separate forums for each interest.

Everyone thinking "we need free speech on Twitter" has lost the plot, and, as you mentioned, the revealed preference of people who claim they want free speech is actually toward heavier moderation (but moderation they like).

Most of them go on Twitter for one kind of content, and on Parler or "Truth" Social for another kind, and they don't really want the streams to cross. We used to have this in the 90's and early 2000's. The question is how to get it back.


How about we just go back to not using one big central social communication platform and go back to the spirit of using separate independently owned forums, chatrooms and websites for our little niches and communities to prevent this issue.


How does this prevent the issue? You can say horrible things on smaller forums as well


Why are we concerned with people saying horrible things per se, and not with the fact that the horrible things are amplified on a platform like Twitter? In the conversation above, the smaller-forums idea lets people go where they want. If you wander into an offensive place and get offended, that's on you, and we do away with the complaint about Twitter promoting the bad and people getting offended inadvertently.

The idea of chasing after evil ideas is flawed from the outset.

It's not new, either, which is why this very long Hacker News thread bothers me. I usually like to wade in on these topics more extensively, but here the entire thread is applauding the notion that free speech is a shitty idea, without any historical conversation.

Oh, and it's incredibly US-centric. Freedom of speech is a principle that was discussed in the Enlightenment and beyond, and fought for (first against religious authorities in ancient times, then again against religious authorities in the 20th century). It happens to exist in the 1st Amendment as a limitation on government, but as a principle and a moral it goes well beyond that.


People are not necessarily all on the same side of defining "the issue", FWIW.


The parent said this

"How about we just go back to not using one big central social communication platform and go back to the spirit of using separate independently owned forums, chatrooms and websites for our little niches and communities to prevent this issue"

Regardless of what the issue is how would being on a smaller forum prevent this?


And? People are allowed to say horrible things. I can join the forum with none of the stuff I deem "horrible", you join the forum with none of the stuff you deem "horrible", everybody is happy!


Sure, and people who don't want to circle jerk about it can leave. This is exactly how it always worked. I've left a lot of toxic forums.


we can't uninvent the smartphone but every day I wish we could


You are onto something, but the companies doing the moderating are doing what they can to ensure that:

a) new entrants can't exist (Parler, Truth... whatever)

b) the rules are so generic that the platform can ban whatever it wants

And this is why people clamor for the simple free speech slogan.

If this is how we understand it, then we do need free speech.


The tragedy of the commons happened. The old situation had a few fatal flaws (mostly discoverability) that meant that new entrants could take almost all the oxygen out of forums by having a weird form of "mass appeal." It worked for a while, but the cracks are starting to show.

Unfortunately, I doubt that we can put the cat back in the bag.


Twitter et al should allow users to choose their own third-party moderators and feed sort. Kind of like choosing your preferred ad blocker. That way everyone can see their preferred curation.


You're basically looking for people like Stallman, who is a very rare kind of person. He thinks about the principles he would like to advocate for and then he fleshes out how these principles interact and how to deal with tensions or contradictions.

Most people who declare their affinity for a value or policy or position merely do so tactically; they think that there's a short-term relationship between the furthering of some movement and their ability to get closer to what they want. Such people are allies when convenient.


> Yet again, I like that Musk and Kanye kind of people claim that they want free speech because at least we can hold them responsible when they don't deliver it.

Really? They've never defined free speech, so how can we hold them accountable?

The idea of a "free speech absolutist" is a complete joke destined for legal consequences. For example, I don't think they mean free speech includes the ability to post obscene content, threats, state secrets, corporate IP, or anything else covered by legal restrictions on free speech: https://en.wikipedia.org/wiki/United_States_free_speech_exce...

Plus, there are all kinds of free speech reductions they could make, like only allowing palatable content into top trending recommendations. You're not banning speech, but you're actively restricting it based on what's attractive to advertisers or even the general public.

If they said "all legally permitted speech on our platform will be given the same protection and visibility, based on metrics that do not include the meaning of the speech", that would be something concrete we could hold them accountable to.


> the ability to post obscene content, threats, state secrets, corporate IP, or any other legal restrictions

Free speech opponents always like to point to these examples when the topic of free speech comes up, but I've never seen free speech advocates use an example of any of these as examples of problematic censorship - instead, they (we) point out voluminous examples of unfashionable opinions being removed. In fact, if Kanye or Musk came out and said "free speech except for" and listed your (specific, easily definable) examples, I'd still agree with them that they were advocating for free speech.


> I've never seen free speech advocates use an example of any of these as examples of problematic censorship

Just look at some of the exceptions to free speech: https://en.wikipedia.org/wiki/United_States_free_speech_exce...

* Incitement

* False statements of fact

* Obscenity

* Fighting words

* Threatening the president of the United States

These are all grey areas that are constantly being tested and censored to varying degrees on multiple social media platforms. Trump was banned from Twitter because of "the risk of further incitement of violence" after Jan. 6. So it's very much at the crux of the debate over whether Trump's speech on Twitter should have been protected by the platform, which is what really catalyzed Parler's and Truth Social's branding in the marketplace as "pro free speech".


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

I posit that this is the unavoidable result of free speech absolutism.

People like their idea of what absolute free speech will be like, but they don't like the real thing when they see it.


> The only problem is that all those free speech advocates are actually NIMBY's when it comes to free speech

It's unclear from your comment. Are you saying that Musk and West are our free speech champions and everybody else is a poser?

Because Musk and West do not care about free speech either. It's the problem these rich idiots all claim to have: "I need a platform where my voice can be heard," while their voices already get the top spot on Twitter's trending page and on newspaper front pages.

Of course Michael Spicer said it better: https://www.youtube.com/watch?v=hrqhgTjFkLo


Musk and West are part of "all those".


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

And even 4chan has rules and mods to enforce them.


I don't like how "I'm a free speech absolutist" has become "I want to force you to read my toxic rants, even when you try to get away from me." You yourself say 4chan is "barely bearable."

What free speech has meant historically is "I don't believe the government should be able to criminalize certain kinds of speech." It never meant "I am entitled to insert garbage into someone else's newspaper or book" until people started misappropriating the term.


It's more that "absolutism" tends to be bad, no matter what you apply it to.


That is what the block button is for.

Most people who are in favor of free speech, are perfectly fine with you personally clicking the block button.

Instead, what they don't want, is a centralized platform preventing consenting parties from engaging with each other.

See the difference?


What exactly is the difference, again? A block button breaks down under sheer numbers. If enough people are responding to you or harassing you, there is no manageable way to block all of them, especially without first passively engaging with their posts.

So instead people build add-ons and blocklists to manage all of that for them. Now you have a separate centralized system for dealing with a certain subset of users.

And if that doesn't work, or they don't want to put in that effort, they just leave the platform. They go somewhere else where the social contract with the platform is to automatically filter out or remove those users. Hence why few people actually use the "free speech" platforms like Gab, Parler, etc.


> What exactly is the difference, again?

Think about two situations. Person A wants to see the content of Person B, so Person A voluntarily chooses to see it.

In the other situation, Person C does not want to see the content of Person B, so they choose not to.

> there is no manageable way to block all of them

Yes there is. Someone could opt into an automated method of blocking people they don't want to see.

As long as nobody is forced to use this automated moderation, and can change it while still having access to the platform, it is fine.

> Now you have a separate centralized platform for dealing with a certain subset of users.

No, actually, it is quite different. The difference is that a person could choose to modify this blocking authority. It is all fine and well to have blocking authorities, as long as I, the user, can turn them off if I choose to, or otherwise modify my own blocklist, or add a whitelist.

So that is the solution. Feel free to have blocklists. Just let me change the blocklist, for myself, if I disagree with it.

There, everyone gets what they want.
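In code, that "blocking authority you can override" idea is basically set arithmetic. A rough sketch (all names hypothetical, not any platform's real API):

    def effective_blocklist(subscribed_lists, personal_blocks, personal_allows):
        """Union of the blocking authorities the user opted into, plus their own
        blocks, minus anyone they explicitly whitelisted."""
        merged = set().union(*subscribed_lists) if subscribed_lists else set()
        return (merged | set(personal_blocks)) - set(personal_allows)

    # The user subscribes to one shared list but overrides a single entry on it.
    shared = [{"spammer1", "troll2"}]
    print(effective_blocklist(shared, {"guy_i_muted"}, {"troll2"}))
    # -> {'spammer1', 'guy_i_muted'}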


>...Instead, what they don't want, is a centralized platform preventing consenting parties from engaging with each other

Who doesn't want that? The billions of people that interact with Twitter and Facebook, or the people who are sympathetic to the fringe racist beliefs that dominate "free speech" sites?


This is a weird take. Most folks are old enough to remember the pre-Trump-era internet and society, where alternative views were allowed and engaged with instead of leading to bans and social cancellation. Conflating that with allowing people to shove views in your face is odd; your ability to tune out is distinctly different from someone's ability to broadcast.


We have a platform with free speech: your own website! I don't get why the personalities pushing this free speech narrative don't just decamp with their massive followings to their own websites. It's like arguing a bar has no right to kick out a drunk and disorderly person because of "free speech." Sorry, people can kick you out of the place they own; if you don't like that, start your own bar/Twitter/etc.


> at least we can hold them responsible when they don't deliver it. This is in contrast with the pure fascist where they cannot be held responsible for anything because they don't claim virtue in first place.

What's the plan to do that when they censor all dissent on the platforms they just bought?


Then do what Gab, Gettr and Truth Social did and 'build your own platform'. Failing that, use a Mastodon instance like mastodon.social as an alternative.

The only thing these networks censor is anything illegal in their host jurisdiction, i.e. the US.


The fediverse (network of activitypub servers, including mastodon) itself doesn't censor anything at all, but if you don't moderate your instance then other instances might decide to stop federating with you to protect their users.


As far as I understood, they're not doing that on a case-by-case basis, but are using centralized block-lists (which every admin can choose to follow automatically). That's a step up from raw centralized censorship, but it's not a big step.


Most admins AFAIK don't have centralized blocklists but when the new bad instances pop up there tend to be #fediblock posts pretty fast, and they get shared pretty widely.


The only thing these networks censor is anything that is illegal in their hosted jurisdiction

Truth Social has apparently banned (and shadow banned) lots of people for posting anti-Trump and anti-GOP political messages.


Probably similar to what is currently happening on all the other platforms censoring dissent.


What do you mean? Be a billionaire and buy the platform?


You don't actually need to be a billionaire to have a platform. It helps if you want to quickly own one, but you can always build it yourself.

Those who felt censored just created their own social media. Some prominent figures in the extreme-right community are blue-collar workers.


When it happens we’ll call you.


>The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

Welcome to true free speech on the internet: the worst and most abrasive of the bunch drive everyone away.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

Almost as if allowing absolute free speech has consequences, almost as if there was a reason absolute free speech isn't a thing anywhere in the world... we might be onto something


Okay I'm going to try and engage in a good faith reply here:

"Free speech absolutist" does not mean "absolute free speech", you're misunderstanding the premise here. The term does not mean that anyone should be allowed to say anything they want at any time.

The phrase means that we should permit any LEGAL speech, where "legal" has tons of historical precedent and can be decided by each country.

We've seen time and again that if the ability to speak freely isn't a priority, censorship grows quickly. If you don't believe there is a ton of censorship happening on these platforms, with a specific set of biases (the biases of the employees of these companies), then I would say you might not be viewing the situation with an open mind.

This is becoming a problem because the Internet has become the new town square where people learn what's going on in the world and talk with each other. If the people running these platforms are allowed to suppress speech that they don't like and promote speech that they do like, then they wield an incredible amount of power. This power is rife for abuse, both by people inside the corporation and within government.

You are correct, though, that 4chan is a gross cesspool, although it's one I believe should be allowed to exist, simply because I believe freedom of speech is important. The problem with that site is obviously that it's anonymous. Coupling anonymity with free speech is a recipe for bringing out the worst in people, but a system where people's identities are out in the open and where they can speak freely is good, in my opinion. We need more free speech, and we need to engage with our fellow countrymen and find common cause; otherwise this insanely polarized partisan situation will keep getting worse.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

I’d argue that every place that allows every kind of speech without restriction will eventually degenerate into a cesspit. You lose the reasonable people quickly because they don’t want to deal with the toxicity and it’s all downhill from there.

> The paradox of tolerance states that if a society is tolerant without limit, its ability to be tolerant is eventually seized or destroyed by the intolerant. Karl Popper described it as the seemingly self-contradictory idea that in order to maintain a tolerant society, the society must retain the right to be intolerant of intolerance.

Source: https://en.wikipedia.org/wiki/Paradox_of_tolerance


I think the main thing that would help wouldn't be to force any particular platform to allow all speech, but to make it so anyone who wanted to set up such a site or platform could do so without fear of being bullied off the internet by an angry mob or by moral puritanism. Web hosting services, firewall services like Cloudflare, payment processors like Visa and Mastercard, and internet providers should be regulated as utilities, like water and electric companies.

If that were done, there would be far fewer issues here. Those who want free-speech-focused platforms could create them, and those who didn't want to use them wouldn't have to. The problem at the moment is not only that there is no place for free speech online, but that any attempt at creating one can be bullied off the internet by an angry mob on social media, because of companies and PR.


> at least we can hold them responsible when they don't deliver it.

These are people who have a track record of evading responsibility.


> because at least we can hold them responsible when they don't deliver it

This is a common but, I believe, overstated, even naive, ideal. What exactly does "holding them responsible" even mean? If a company is greenwashing and still emitting carbon, what really is the difference from the company that never claimed to care at all? The carbon is emitted all the same. "Oh, the stock price would fall because investors would lose trust." But greenwashing is a dime a dozen these days, and I think investors and the upper class know that greenwashing is just marketing and don't truly expect or care about the cause.

Regarding this, how does anyone hold Parler accountable for making a platform of "free speech"? Either you sign up or you don't. If you sign up and complain they aren't extreme enough, they don't care, or at least they have no material reason to care. If you don't, where else are you going to go? Twitter? But the whole demographic is people who didn't like Twitter in the first place and want to be with their kind. So how do you "hold them accountable" without, say, legislation, regulation, and government oversight, things today's "free speech" advocates are opposed to?


"I hate what you say but I will die defending your right to say it". In all places, these people are curating comments and posts to push agenda.

It's incredibly tiresome, not just online but in real life. There is no freedom-vs-control debate. There are just the people who advocate arresting those who teach their children inconvenient truths vs those who advocate arresting those who use naughty language.


"The 2020 election was stolen" isn't an inconvenient truth, and it's not naughty language; it's a lie that the majority of GOP House candidates are furthering. Your mischaracterization of the debate leads me to believe you made this comment in bad faith.


I was actually talking about CRT and the history of slavery in the United States. That's the inconvenient truth that legislatures in some states are attempting to arrest people for teaching. Plenty of people are trying to keep election conspiracy theories off social media, but as far as I know nobody is trying to pass a law against spreading them.

Your mischaracterization of the debate leads me to believe you made this comment in bad faith.

And there it is. Advocate for free speech in front of a left-leaning audience and you're a conspiracy-spewing Republican. Advocate for free speech in front of a right-leaning audience and you're a child-grooming communist. Like I said, tiresome.


Right, no one actually wants free speech, they're just mad when the moderator disagrees with them.


That's kind of a hilarious way to frame each side of the debate. Hilarious in a bad way that doesn't give you the benefit of the doubt on the topic.


"The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable."

Yes, this is what you'll end up with on so-called 'free speech' platforms. Because, unfortunately, these days what people really mean when they say 'free speech' is often a veil for saying hateful things about marginalized groups of people.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

And what might we learn from this?


> Yet again, I like that Musk and Kanye kind of people claim that they want free speech because at least we can hold them responsible when they don't deliver it.

What does "holding them responsible" look like? Making pseudo-anonymous comments on HN calling them out?

Giving them the benefit of the doubt at this point, that they are acting in good faith, seems hopelessly naïve and exactly the smokescreen they are looking for. They're saying whatever Bullshit(tm) it takes to get through this week with the best possible outcome, and you want to circle back in a couple of years? No one will care or remember what the initial statement was.

https://en.m.wikipedia.org/wiki/On_Bullshit#Lying_and_bullsh...


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

It's very interesting that you don't see the straight line between "absolute" free speech and toxicity.

Absolute free speech IRL is moderated by physical and emotional stimuli and inhibitions against direct confrontation and bucking social norms. There are also legal repercussions, such as libel or defamation suits, for particularly harmful speech. The anonymity, and lack of accountability or feedback to one's words makes people far less inhibited online.


> The only somewhat free place I've seen is 4Chan but it contains so much toxicity, that's its barely bearable.

You've found the problem. There _are_ "anything legal" online spaces, and they _suck_. There's no way to have 0 moderation and not have the place turn basically into 4Chan.

If people really want to be on totally free speech platforms, they can just go on 4Chan, but what they really want is to force everyone else to engage with the toxic shit they want to say and no one else wants to hear.


Forgive my ignorance: I thought you could basically say/post anything on Twitter as long as it was not:

* actively illegal (CP, terrorism, etc.)

* hacked info

* deadnaming trans people

Is that not the case anymore?


That is not the case, and hasn't been for years.


I read the rules and the Wikipedia page and that's all I can find. What else is there?


https://help.twitter.com/en/rules-and-policies

Why would you go to Wikipedia to get a copy of Twitter's policies? Anyway, based on what you've said so far, you haven't actually been paying attention. It's been years since Dorsey went on Rogan's show and took one of his trust-and-safety lawyers with him to do all the legal talk so he could maintain deniability.


I read that. It seems to back up my point. Moderation is minimal outside of the specific cases I listed...


Interesting how often "free speech" seems to equate with "free to be an offensive jerk". If your best argument is presented in a way that makes it sound like "You're ruining my life! I hate you forever!", then maybe you should take a timeout and come back when you can discuss things calmly and rationally. I listen to arguments, not to tantrums and swearing.

Edit: typo


> but it contains so much toxicity

Easy. Allow all legal speech, but make it very easy for each individual user to block what they don’t want to see.

Maybe even allow external providers to offer filters. Like an App Store but for content filters.

Let each individual decide how much and what kind of censorship they want.
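A rough sketch of what those client-side, opt-in filters could look like, assuming each "filter" from a provider is just a predicate the user enables (everything here is hypothetical, not an existing API):

    from typing import Callable, List

    Filter = Callable[[str], bool]   # returns True if the post should be hidden

    def all_caps_filter(post: str) -> bool:
        """Example third-party filter: hide shouty all-caps posts."""
        letters = [c for c in post if c.isalpha()]
        return bool(letters) and all(c.isupper() for c in letters)

    def curate(feed: List[str], enabled: List[Filter]) -> List[str]:
        """The platform carries everything legal; each user's chosen filters hide the rest."""
        return [post for post in feed if not any(f(post) for f in enabled)]

    feed = ["hello world", "BUY MY COIN NOW", "a normal post"]
    print(curate(feed, [all_caps_filter]))   # -> ['hello world', 'a normal post']

The platform only has to expose the hook; which filters to trust stays entirely the user's call, much like picking an ad-blocker list.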


> Yet again, I like that Musk and Kanye kind of people claim that they want free speech because at least we can hold them responsible when they don't deliver it.

How do you propose to do that, when you can't hold Reddit, Twitter, et al. accountable for the same today?


Free speech is only a concept in a judicial sense. E.g. if you come barging into my house spewing racist shit, I may not be able to call the police on you for being a racist, but I'm throwing you off my property.


Free speech advocacy was trendy with the left when the right held more institutional power; now it's embraced (at least as a slogan) by the right. In much the same way, I don't trust Apple to give a damn about software freedom, but they backed WebKit and Clang for their own self-interested reasons. I accept that being "on the side of free expression" means I will need to shift political allegiances now and then as power consolidation shifts. Along the way, the tools for enabling and preserving free speech get better and better, though. Long live the anti-establishment wingnut right... until they win; then I'll be hanging with Jimmy Dore.

