> "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230)
To sum up: if the platform becomes the "information content provider", defined as "any person or entity that is responsible, in whole or in part, for the creation or development of information", then they lose the protection. The statute also excepts federal criminal liability and intellectual property claims.
Creation or development of information can consist exclusively of moderation, as copyright cases have shown. Cutting (deciding what to show and what not to show), re-arranging, or changing the context can create a new original work, which would make the creator an information content provider for it. At the same time, doing any of those things does not automatically make the moderator a creator of an original work.
As lawyers like to say, it all depends on the details of the specific case. To take an extreme example outside of this twitter discussion, taking a video interview and cutting it to create a new narrative would make the editor responsible for that whole new version.
Feel free to link to the supreme court ruling that sets a precedent proving that creating new derivative works does not result in the author becoming an information content provider.
To take a fictional twitter example, blocking a user from a website is unlikely to create a derivative work. Removing a post in the middle of a twitter chain that makes up a story could change the narrative and content of that story, and if done intentionally it would create a derivative work. The user could then sue twitter for copyright infringement, and, if the new story is defamatory, under defamation liability as well. We could for example imagine a rape story where the post that included the word "Stop" was removed; the author would then have a legitimate legal claim against the moderation.
It all depends on context, intent, and the details of a specific case. The tools of moderation do not define what is legal and what is not.
It comes down to intent. If the intent of moderation is "taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected", section 230 provides immunity. "Otherwise objectionable" is very, very broad.
To that I 100% agree. If the intent of a moderation action is only to restrict access to or availability of material for those reasons, then that is likely not a derivative work.
Be a bit more tolerant of other people's point of view.
Anyway, I think you are misinterpreting the intention of that sentence. It basically means that, in principle, merely being a "provider or user of an interactive computer service" does not imply being "the publisher or speaker of any information provided [...]". But that does not exempt them from potentially being the actual publisher, with all the rights/obligations that go with it.
Trivial example: someone publishing their work on the web (hence becoming a "user of an interactive computer service") does not thereby lose copyright, even though they "shall [not] be treated as the publisher or speaker of any information provided [...]".
Again, IANAL, but I read a lot about copyright, safe harbor law, the DMCA, etc., and it goes like that.
> Anyway, I think you are misinterpreting the intention of that sentence.
They're not wrong. Every single time Section 230 comes up, there's somebody here arguing that Section 230 doesn't actually mean that companies can choose who they want to censor without becoming a publisher.
But it does. That was the explicit point of Section 230, and that's how Section 230 has played out in the courts ever since it was enacted.
But of course, that entire debate about Section 230 is irrelevant here because Twitter hasn't censored anybody, and I haven't seen anyone give a clear reason why neutrality requirements on commentary wouldn't be outright unconstitutional, regardless of what Section 230 says.
"A defendant must satisfy each of the three prongs to gain the benefit of the immunity:
1. [...]
2. [...]
3. The information must be "provided by another information content provider," i.e., the defendant must not be the "information content provider" of the harmful information at issue."
The moment you create your own content (even if you are a content provider yourself) you lose the protection of Section 230 over that. Editing/policing content is, in most cases, akin to creating content. You cannot make a list of "staff picks" and then claim that the content comes from other sources. Putting that list together (even if you're just quoting somebody else) is equivalent to an act of creation; you are the creator of that list. You chose what to put in it and what to exclude. You ARE the original creator of this, and Section 230 does not apply to you.
> The moment you create your own content (even if you are a content provider yourself) you lose the protection of Section 230.
No. Practically every social network and publisher creates their own content occasionally, yet there's plenty of precedent for companies like Google, Ebay, Amazon, Apple, and Facebook being protected under Section 230.
A better, more accurate way of phrasing your objection would be to say, "Section 230 does not protect you from lawsuits over the specific content you created." So if Twitter's company-written annotation were found to be libelous, they could of course be sued over that.
But adding your own content to a forum/platform has no bearing on whether Section 230 applies more broadly to other content that you host. Take a deeper look at your example:
> You cannot make a list of "staff picks" and then claim that the content comes from other sources
This is exactly what Amazon, Apple, and Google Play do every day. And all of those platforms have been ruled to be protected by Section 230 in multiple lawsuits -- covering everything from trademark violations to defective products. The fact that Amazon has a "recommended brand" section does not mean that they are liable for everything that shows up on their store. And that's a principle that's held up in real courts over, and over, and over again.
> Editing/policing content
I don't want to keep beating the same horse, but that's not what Twitter did. They didn't edit Trump's tweet or restrict it, they added their own speech next to Trump's tweet. That has nothing to do with Section 230, it's just a generic, common case of 1st Amendment protected counterspeech.
> Be a bit more tolerant of other people's point of view.
Why would I tolerate a blatant falsehood?
> that does not exempt them from being the actual publisher, and all the rights/obligations that go with it.
With respect, you're totally misinformed. Social media websites do not fall under any kind of "publisher" obligation, this is a totally made up meme that people spread online.
Now, if you want to argue that we should change the laws so that these websites would fall under some kind of publisher obligations, I would disagree, but that would at least allow room for "tolerance of other people's point of view". However, in terms of the actual law you and the parent are unequivocally incorrect.
I really don't know the answer to this so I'm not trying to trick you, really just trying to see how far Section 230 goes.
If a Twitter user posts child porn (which is an example of an illegal act in the US), and Twitter knows that it is on the platform and does not remove the content, do you know if Twitter would therefore become liable for the content?
(Again, this is more exploring Section 230, not about the specific controversy du jour.)
They would very likely be liable under SESTA/FOSTA, although I don't know how much precedent exists around that specific law right now. This is part of the reason why many adult sections on sites like Reddit/Craigslist were shut down after SESTA/FOSTA passed. The companies didn't want to risk extra liability in that area.
Section 230 also wouldn't necessarily have protected them before SESTA/FOSTA either; federal criminal liability was always exempted. It's just that SESTA/FOSTA made that a lot more explicit and generally widened that liability.
Section 230 isn't a blanket protection against literally anything (it also has a number of holes surrounding copyright). It's just a much broader protection than many people online think, and the areas where it doesn't protect platforms typically don't line up well with where Internet commenters think it shouldn't protect companies.
IANAL, don't go out and do something stupid and then claim that I said it was legally OK. But in general a good heuristic for talking about Section 230 online is that it's, "not unlimited, but probably broader than you're thinking." But if you're trying to launch your own service or something and you want legal advice about where exactly the line is drawn, you should talk to an actual lawyer.
> If a Twitter user posts child porn (which is an example of an illegal act in the US), and Twitter knows that it is on the platform and does not remove the content, do you know if Twitter would therefore become liable for the content?
Section 230 isn't absolute; there are several specific exceptions. One example is the FOSTA law enacted in 2018, which explicitly overrides Section 230.
> The bill amends the Communications Act of 1934 to declare that section 230 does not limit: (1) a federal civil claim for conduct that constitutes sex trafficking, (2) a federal criminal charge for conduct that constitutes sex trafficking, or (3) a state criminal charge for conduct that promotes or facilitates prostitution in violation of this bill.
There are some other examples I'm not thinking of off the top of my head, but on a note directed more towards the general discussion, I'd point out that creating laws to limit the scope of Section 230 is illustrative of the kind of freedoms it affords site operators in the general case.
> Social media websites do not fall under any kind of "publisher" obligation
No one said they did. But Section 230 also does not imply that they're exempt from that, in case they do become one. And remember that those rights/obligations are acquired the moment they are exercised.
Consider the following:
Twitter (the platform), on its official twitter account (on their own platform), decides to publish something which has legal repercussions. Are they exempt from them because of that statement in Section 230? No, not at all.
To use a different example, take the New York Times web site: Section 230 gives them immunity for anything posted by randos in the comments on their articles, where they operate as a platform.
Section 230 does NOT give the NYT immunity for anything in the articles themselves, where they operate as a publisher. However, absent S. 230 protection, those articles and their publisher still enjoy regular First Amendment protection, which is quite strong. In particular, there are nearly insurmountable obstacles for a public figure to win a defamation lawsuit in the US.
Yes, and section 230 explicitly states that moderation does not waive that immunity:
(2) Civil liability
No provider or user of an interactive computer service shall be held liable on account of—
(A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected [...]
> Twitter (the platform), on its official twitter account (on their own platform) decides to publish something which has legal repercussions. Are they exempt of them because of that statement on Section 230? No, not at all.
No, not at all, because Section 230 has nothing to say about the scenario you are describing.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information _provided by another information content provider_
In your scenario Twitter is the provider of the information, so naturally they are liable for the legal repercussions of posting that information. Everyone already understands how this works intuitively: when something illegal or otherwise legally significant is posted to the internet by a user, there isn't even a question of whether the hosting platform is legally responsible for it, as long as they are seen to be taking reasonable steps to remove the offending content. If the site operators post the questionable content directly, then obviously they are liable.
That's not what we're talking about though; we're discussing Twitter having labeled Trump's tweet as misinformation. I guess you're suggesting that Twitter is the "publisher" of that warning and thus legally responsible for it, which is true, but there is nothing illegal about what they published, so the hypothetical is moot.
> Section 230 will not exempt them from what they do.
I'm glad we came to an understanding but this is a strawman. You might as well be saying "Section 230 does not exempt twitter from the law". This is very obviously not something anyone is arguing.
Social media platforms should be considered publications. A company cannot say they have an open platform and call themselves immune if they're going to editorialize and punish views they disagree with. Section 230 needs to be destroyed.
Destroyed is too strong; it would basically terminate all social media sites, news aggregators, comment sections, forums... everything, since precisely nobody is going to sign up for the legal liability. (Except, maybe, megacorps like Facebook, with a net gain of nothing.)
I would like to see it greatly narrowed.
Even if we ignore this particular instance as a special case where the act was justified, large companies having unfettered control over most political discourse in the country, and wielding that power in an arbitrary, unaccountable way is still a problem.
The leap you're making here is that by moderating things on their platform, they suddenly become the publisher of said information. This is neither in the text of the law, nor in how the courts have interpreted it.
Yes, if Twitter publishes something defamatory in their own name, they're liable. No, they are not the publisher of any content they choose to moderate.
> "No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider" (47 U.S.C. § 230)