
> Title 47 U.S. Code § 230 explicitly states that publishers are not liable for the content that their users post, with some minor exceptions related to sex trafficking.

No, it doesn't.

It states that online systems with user-generated content (and other users of such systems) aren't treated as publishers of what their users post, with some major exceptions: civil liability related to sex trafficking, and all criminal liability regardless of subject matter. Civil liability not deriving from status as a “publisher” is also not, on its face, affected, though some courts have controversially applied 230 to immunize against notice-based civil liability that would attach to them as distributors, even if they aren't considered publishers.



> No, it doesn't.

To be accurate, it certainly does.

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

It also says other things that I neglected to state, most importantly that section 230 does nothing to change criminal law, so it's also fair to call me out on that.


> To be accurate, it certainly does.

No, it doesn't say they won't be liable for user content; it says they won't be treated as the publisher. There is liability for content that is tied to being a publisher, and there is liability that has other bases. On its face, 230 says nothing about liability on those other bases (as noted in the GP, some courts have also used it to provide immunity from liability as a distributor, but that is controversial and not stated in the text).


230 does protect platforms from liability for what their user base posts. Having run forums and chat servers for a long time, I can attest to the experience of having to moderate content and receiving legal complaints. There are two major factors that people conflate in these discussions. First, there is the direct legal aspect of hosting illicit content. The platform is covered if it makes an effort to remove illicit content AND it is not itself encouraging the illegal behavior. For example, if users who also have admin roles create sub-forums that promote illegal behavior and the platform does not warn or ban those admins, it may eventually fall outside the protection of section 230.

Then there is the acceptable use policy of the hosting provider(s): DNS, server, CDN, app store. This is entirely outside of 230. If the provider gets enough complaints, they may eventually see your site as a risk and choose to terminate your account in order to protect the image of their business. They do not want their reputation tarnished, as it will affect their profits. I think that is totally fair. If you want to run a site that is likely to provoke an emotional response from the public, then in my opinion it would be best to find a hosting provider that accepts the risk in a contract. The contract should state what is expected of you, what you expect of them, and what happens if the contract is terminated, such as off-boarding timelines. Smaller startups are at higher risk because the provider has less to lose by booting them off its infrastructure.

Where I believe this issue has gone sideways is in what the industry considers an appropriate method of moderation. The big platforms like Facebook, Twitter, and Apple are using automated systems to block or shadow-ban things they consider a risk to their company or their hosting providers. This leads to people fleeing those systems for smaller startups that do not yet have these automated moderation and shadow-banning systems; that is what happened with Parler and a handful of other newer platforms that wanted to capture all the refugees from the big platforms. A similar thing is happening with that alternative to YouTube, but I cannot remember what it is called. Bitchute?

Another potential problem that may confuse the 230 discussion is that many powerful politicians and corporate leaders use the big platforms like Twitter and Facebook. They, along with big lobbyists and investors, may have some influence over the behavior of these platforms and may be able to tell them to squash the sites that do not follow the same automated approach to banning and shadow-banning. Does that create echo chambers? Is that what is happening here? Not sure. If so, I predict it will push many people underground, and that is probably not great for agents that would like to keep an eye on certain people.



