I agree with the notion that governments shouldn't be able to coerce communication companies to echo their own positions. However, if they use a built-in function of the software (like a 'delete comment' button) to moderate their account, then I don't see how that's a 1st Amendment violation.
However, even with the first point, I am a little confused. Maybe someone can explain the legality of this to me: the 1st Amendment talks about Congress not being able to establish laws to prevent freedom of speech. Does a government official asking a social media company to do something count as the creation of a law? This is in contrast to Congress actually creating a law to make a social media company do something regarding free speech, which would be a violation.
I think there are two issues here. First, the freedom of speech issue has generally been broadened to include all of government: generally, the government can't ask a company to restrict speech, even outside of a law.
But there is more to the first amendment. We also have a right to communicate with the government. If that avenue is shut down, it can be a first amendment violation as well.
> euphemism for federal intelligence agencies strong-arming social media companies to censor American citizens
Specifics matter.
If a hurricane is coming and a meme is countering FEMA’s advice, I’d want them to be able to pick up the phone and let Facebook know. They shouldn’t be allowed to compel anyone, explicitly or implicitly. But the notion that any communication between the government and social media platforms is censorship is as extreme as arguing that the government should have takedown powers.
I agree that specifics matter, but we don't need to argue about semantics. Thanks to the twitter files we know how this played out in practice. And it wasn't how you're describing: it was federal agencies acting as de facto censors of Americans posting information they didn't like. This happened at scale, to hundreds of thousands of posts. That's what "contacting social media companies" is referring to here, and it's a disgusting euphemism on the part of NYT.
> Thanks to the twitter files we know how this played out in practice. And it wasn't how you're describing: it was federal agencies acting as de facto censors
Sorry, where in the Twitter files did it show this?
What I saw was campaigns on both sides giving Twitter feedback, and then Twitter employees independently deliberating.
I’ve seen a few cases from that, and they absolutely looked like agencies pointing things out: sometimes Twitter’d go “oh yeah, that’s bad and against our TOS” and take action, and other times they’d go “nah, it’s ok” and… not take action. I'm also curious to see whether there’s evidence that the cases I’ve seen were outliers rather than the norm, and that in fact Twitter was normally just doing whatever was asked of it even when what was pointed out wasn’t actually a clear violation of Twitter’s rules.
There seemed to be a lot of confusion between "govt brought something to a company's attention" and "Jack Booted Thugs neck squeezed social media company to censor for them."
Everything I’ve seen from the Twitter Files makes it look like nothing bad was going on. It’s done the opposite of making me believe there’s a scandal being exposed. “Oh, that’s the worst that was ‘uncovered’? Cool, glad everything was actually fine.”
Wild how the same primary source can be taken so completely differently by different sets of people.
I'm sorry, I object in the strongest possible terms to characterizing the USG's involvement with social media to be "bringing something to their attention". They are not some friendly internet anon or third-party researchers, they are state agencies with the power to make life very difficult for any company they decide isn't compliant enough.
In First Amendment jurisprudence, there is a very specific bar as to when government involvement becomes unconstitutional coercive state action. That bar requires direction from the government and a (potentially implicit) threat of government action or retaliation if compliance is not met. Bellyaching about what a company or individual is doing doesn't meet that bar; otherwise pretty much every single congressional hearing would be illegal.
As far as I'm aware, no one has yet produced any evidence that the US government reached that bar with respect to Twitter. There have definitely been several lawsuits that attempted to allege this--and some where the judges tried to push the plaintiffs to actually properly allege it--but nothing has ever stuck, primarily because everyone filing these suits seems to think that the government looking in a social media company's direction somehow makes it automatically a First Amendment violation.
Whether or not the involvement documented rises to the level of a 1A violation is an interesting legal question but ultimately not the one that I'm primarily concerned with.
The question I care about is: is the government taking a direct and active role in censoring online speech? The answer is a clear yes, and nobody is really disputing it. I find the arguments here ("well twitter helped", "it wasn't technically illegal") to be very weak defenses. Paid employees of federal agencies should not be involved in the moderation processes of social media companies.
And as far as the legal question is concerned, so far federal courts have ruled that the government's actions do in fact violate the first amendment:
> The court found that some of the communications between the federal government and the social media companies to try to fight alleged COVID-19 misinformation "coerced or significantly encouraged social media platforms to moderate content", which violated the First Amendment.[21]
So I do stand corrected there, although I will note that the Fifth Circus is a poor guide to understanding the First Amendment. Indeed, if you take two of its relevant decisions that are currently being appealed before SCOTUS, it believes that it is a First Amendment violation for the US government to talk to you about your moderation policies, but it is not a First Amendment violation for Texas to talk to you about your moderation policies (or we'll sue you if you don't want to listen to us).
Thank you for the link. That does strike me as inappropriate (if not quite coercive).
> power to make life very difficult for any company they decide isn't compliant enough
This is what I don’t see evidence of. (Take Twitter/X in its current form.)
Reading those messages, I see employees at Twitter who are ideologically aligned with the President acting a little too chummily in restricting someone’s speech. Twitter wasn’t doing anything it didn’t want to do.
> the notion that any communication between the government and social media platforms is censorship is as extreme as arguing that the government should have takedown powers.
Do you feel the responsibility to argue this, or is it just a bare ass appeal to the moderation fallacy? Is not killing your wife just as extreme as killing her? Are we required to come up with a way to half-kill her in order to sound like serious, nuanced thinkers?
> is it just a bare ass appeal to the moderation fallacy
Both ends of that extreme happened.
Doughty’s preliminary injunction effectively banned the government from communicating with social media platforms [1]. And the appeals courts found that the government had given itself de facto takedown powers.
It's fairly accurate though. The preliminary injunction in that case blocked government agencies from even talking to social media companies (though some of that got reversed by the 5th circuit).
This is a cudgel that will be used by both/all sides as they see fit to stifle narratives and manipulate platforms they disagree with.
The only option is to pursue freedom of speech, because any method of defining what is and isn't acceptable will be manipulated by those with the power to do so.
As you can tell by the greyness of this comment, some people have a problem with not being able to manipulate speech.
For reference, look at the definition and usage of the word terrorism over the last 30 years. Compare 1990s' usage with today's usage. You would think people knew what terrorism meant back then --like defining acceptable speech today. What will it mean tomorrow?
Anybody, including the government, should be allowed to contact a publisher, point to something it is publishing on its site, and ask, “Hey, do you think that might violate your terms or editorial standards?” and leave the ultimate decision to the publisher. My understanding is that this is all that happened with Twitter.
The public, yes. The government? No. They ought to allow any positive, negative, or neutral speech about them, their policies, or policies they are responsible for or contemplating. And definitely not if it has anything to do with elections and interfering with elections, their own or others'.
So if someone posts that age-old hoax/troll claiming “Remember! Election Day for Party X is one day later!” The government should not be able to warn the social media company about election hoaxes they are hosting?
The risk is that a D would be very concerned with a D-related hoax, but not at all concerned with an R-related hoax or vice versa. By selectively notifying social media companies they could then influence elections.
Given the numerous other options already available to the government through its official powers, I also don’t see that as being at all necessary to resolve the problem.
So, on that question, no. It seems to be all risk and no reward.
What? Maybe if people can be fooled so easily they ought not be voting in consequential elections.
But no, the government should not be getting into regulating speech. Platforms should adopt things like Community Notes à la X to counter hoaxes and other nonsense, but not the government, lest we slide into the situations they've got in Turkey, Russia, Egypt, etc., where everything is ostensibly to counter misinformation but in actuality results in control of political speech by the party in power.
I agree that is the ideal situation but I don’t know that it’s a good way to do things.
It is significantly more likely the answer will be “Yes, we should remove that” regardless of whether it violates their editorial standards if the question comes from power, even if no threat is implied or intended and the wording is exactly the same.
Nobody actually wants this though. Everyone agrees there are certain types of speech that should be moderated or banned. Different factions have different opinions about what the moderation should apply to, but everyone wants to moderate. For example: literally every popular website in existence.
For a privately owned website, moderation is not a violation of anyone's freedom of speech. Your freedom of speech does not give you the right to insist that you can post on a website that I own. If I don't like what you say on my website, I can ban you. You can go get your own website if you want a place to say what you want.
These cases are specifically about government officials and how they manage their accounts on social media platforms. What I find disappointing is that nobody appears to be talking about the obvious solution: an official's personal account should only be for personal posts. If the government they are officials of wants an official account to post official statements on, they should set up a separate one solely for that purpose. Then the whole issue the court is trying to deal with here would evaporate.
> For a privately owned website, moderation is not a violation of anyone's freedom of speech
It’s not a violation of the First Amendment. It is a curtailment of someone’s freedom of speech.
> an official's personal account should only be for personal posts. If the government they are officials of wants an official account to post official statements on, they should set up a separate one solely for that purpose
These rulings effectively say that if you taint your personal account with official business, you have to treat it like an official account. That should lead to a best practice as you describe.
> These rulings effectively say that if you taint your personal account with official business, you have to treat it like an official account.
I didn't gather a rule that simple from the rulings; to me the rule they described looked more like "if you are a government official and you have an account, and anyone sues you for deleting their posts or banning them, courts will have to do a complicated analysis of what you posted that the person suing you was responding to, to see whether it counts as official or not--and oh by the way, Ninth/Sixth Circuit, you are now on the hook to do that".
Also, even if we take the rule as you state it, it still is not what I was describing. What I was describing is: don't taint your personal account with official business in the first place. The government you work for should establish separate official accounts for official business.
The Declaration of Right, No. 9, said that members of Parliament have freedom of speech in Parliament--i.e., in the meeting place of the public body they are members of. That is very different from claiming the right to speak in a private venue you don't own.
> not components of the freedom of speech when talking about private property
Of course it is. We don’t have an absolute right to freedom of speech. That doesn’t change the fact that we are curtailing one person’s freedom of speech in favour of your property rights.
> For a privately owned website, moderation is not a violation of anyone's freedom of speech. Your freedom of speech does not give you the right to insist that you can post on a website that I own. If I don't like what you say on my website, I can ban you. You can go get your own website if you want a place to say what you want.
Precisely this! Posting something to Facebook is not like talking on the telephone. It is like writing a letter to the editor of a newspaper. In both cases, a moderator reviews the content and decides what to publish. In the newspaper’s case, it takes a long time plus manual review, and in Facebook’s case, it’s automated and nearly instant (with follow-up automatic or manual moderation). But the mechanic is the same.
Same on HN. When I hit the “reply” button now, I am requesting HN to please post this comment, but if they don’t want it here, they are well within their rights to nuke it.
I agree with everything you wrote here, but the reality is, a lot of people don't care about freedom of speech for website owners.
I also agree with your point about officials using social media, but the post I was responding to was reflecting on the philosophy of free speech on the internet in general.
If you don’t mention that PruneYard 1) interpreted the California constitution, not the First Amendment, and 2) does not have any relationship to a coherent speech product, you are either being intentionally misleading or you lack sufficient context to make sense of the law in this area.
"There are absolutely all sorts of ways that private companies can be forced by the government to allow certain speech.".
This point stands and is absolutely true.
I never mentioned anything about the first amendment. So whatever you think about that is completely made up in your head, and if anything that is you being disingenuous.
Instead, I quite clearly was talking about the free speech rights in California that were covered by that court case.
Which has absolutely nothing to do with the first amendment, other than the implicit statement that yes the government can force private companies to host certain speech in certain circumstances.
So my original statement stands and you have not directly disagreed with it in any way.
You could have just read my post to get the answer to this question.
I put the details right in there.
But here is the reason from my original comment: "This is the case even though they are a private company.".
It is because shopping malls are private companies.
Therefore, there are cases where it can be legal for the government to force private companies to allow certain speech.
Also, please directly acknowledge that I mentioned California in my original comment and that you just ignored that part for some reason in an incorrect attempt at a disagreement.
I expect you are just going to jump to another unrelated question or disagreement and not acknowledge that now both of your questions could have been answered by just reading my comment, instead of intentionally misunderstanding it.
Finally, you didn't directly say whether you agreed or disagreed with my central point which was this:
"There are absolutely all sorts of ways that private companies can be forced by the government to allow certain speech.".
I will assume this means that you have no disagreement with me in any way on this point because you just ignored it.
I disagree with the proposition that a person should be mis-citing irrelevant cases. My position that private companies can make editorial decisions to craft a coherent speech product in the commercial marketplace is well-staked.
> My position that private companies can make editorial decisions
Since you just ignored my content without quoting any of it, does that mean that, yes, you agree that private companies can be forced to host certain speech in some circumstances? (Circumstances like, for example, what was in PruneYard, or similar?)
You ignored this point again, so it seems like you agree with it but don't want to admit it.
> mis-citing irrelevant cases
It's not a miscite.
The purpose of referencing that case is to prove that, yes, private companies can in some circumstances be forced to host certain speech.
Which you have not disagreed with yet. Therefore it's not a miscite.
> Everyone agrees there are certain types of speech that should be moderated or banned.
Setting aside direct calls to violence, there are plenty of free speech absolutists who are fine with idiots saying whatever they’d like to whoever they’d like.
> 65% of Americans support tech companies moderating false information online and 55% support the U.S. government taking these steps. These shares have increased since 2018.
> Americans are even more supportive of tech companies (71%) and the U.S. government (60%) restricting extremely violent content online.
> Democrats are more supportive than Republicans of tech companies and the U.S. government restricting extremely violent content and false information online. The partisan gap in support for restricting false information has grown substantially since 2018.
Well, for some people it's more like "Everybody's a free-speech absolutist until the speech causes them ~~to get punched in the face~~ shoot someone in the face for trying to punch them in the face." And even then some of them probably still are free speech absolutists.
The common thread behind people wanting to restrict speech (be it violence or porn or whatever) is fear. People fear other people and want another organization (the government) to protect them or make someone else liable. It's practical and efficient, rather than moral. The same kind of short-term thinking is why global warming will not be uniformly addressed for many generations... maybe never.
> The common thread behind people wanting to restrict speech [...] is fear.
I'm resistant to this framing because I think it could lead to some facile arguments, since ultimately "fear" (anticipation of future harm) is also behind huge swathes of laws/restrictions which are generally uncontroversial and moral.
> People fear other people and want another organization (the government) to protect them or make someone else liable. It's practical and efficient, rather than moral.
This also applies to when someone points a loaded gun at your head and screams that you've insulted their mother for the last time: You fear that other person and want another organization (the government) to protect you and make them liable!
However, that doesn't make it unreasonable; surely assault with a deadly weapon should continue to be a crime.
Yes, I know what you're going to say, "nobody" does not mean literally nobody, but in this case there are (somewhat) prominent views that do literally mean no speech should be banned.
> there are (somewhat) prominent views that do literally mean no speech should be banned
Do you have an example that’s been presented coherently? Every one I’ve seen creates caveats for, at their worst, speech they don’t like, and at their best, spam. (The latter seeming like a small exception until we get around to precisely defining it.)
I would argue they're not prominent, there are some fringe individuals who might hold this philosophy, but not any organization that actually runs a site where users can speak, and certainly nobody involved in politics regardless of their political beliefs.
You know, as long as it's not the government dictating it.
Or, look at it this way, if this stifler of free speech were to add the flight logs of prominent opponents of his, I bet some of the complainers complaining about his suppression would clamor for suppression.
Pretty sure I do. I just want comprehensive rules that are publicly and readily available from the get-go, I want total transparency in moderation, and I don't want hidden actions only available to moderators. Further, I'd like all moderator actions to be publicly auditable.
Something like Reddit has hidden moderator actions and shadowbans, is not auditable, and has hidden rules that only some users are subject to. HN is probably the closest thing to my ideal forum, but I'd like the algorithms to be publicly viewable and dang's actions to be publicly auditable.
If you don't trust a platform without transparent moderation, why would you trust their "publicly auditable" modlogs? What power do you think being able to "audit" moderator actions gives you?
> Everyone agrees there are certain types of speech that should be moderated or banned.
This is a false equivalence.
Someone who is ok with literal direct calls to violence against individuals being censored by the government, after full due process rights to a trial have been exercised, is much different from someone who wants most of their political opponents censored arbitrarily, without any due process.
The mention of factions is opposed to the thesis of 'everyone'. Things like spam are not typically considered factional; they are so uncontroversial, in fact, that most people don't even think of spam as a form of speech or of its moderation as a form of censorship.
"Spam" is in the eye of the beholder, like "hate speech". There are obvious cases and there are times when the label is applied because someone doesn't like the speech.
There are situations where you don't want people to put others in immediate danger by their speech, like, for example, setting off a stampede. But still, the old "yelling fire in a crowded theater" is not in itself illegal.
Few free speech advocates ever seem to lead with the crux of not-actually-Voltaire's "I Disapprove of What You Say, But I Will Defend to the Death Your Right to Say It" by defending the kinds of speech people actually disapprove of.