On moderation: If you treat platforms as liable for user-posted content, their only recourse is to censor anything that might expose them to liability.
In practice, this amounts to option 2 (the NYT). The NYT is not a forum. It pre-vets all of its content and runs it by a team of editors. You can't run an open forum like HN or Reddit that way. I don't like option 2, because I would argue having a place where anyone can communicate and publish information outside of locked-down, establishment media channels is really good.
If you tell platforms that they won't be liable as long as they don't moderate/censor (the "true platform" argument people bring up), then you've taken away their ability to moderate at all. That's how you end up with every open platform looking like 8Chan (option 1). I would also argue that allowing communities to filter and ban bad actors is necessary for an inclusive, open Internet.
The innovation of Section 230 was that it gave companies, forum owners, and platform maintainers permission to moderate. It created option 3. Owners didn't have to choose between blocking everything and blocking nothing, because they couldn't be held liable for user content at all, regardless of their moderation strategy. That meant they could be as aggressive (or passive) with moderation as they liked without worrying that it would make them liable for any content they missed.
Section 230 is an attempt to deal with two facts -- first, that moderation is fundamental to healthy communities, and second, that when users can instantly post their own content, no system (human or AI-driven) will ever be able to moderate perfectly.
So, far from being a misleading sidenote or a jump in logic, content moderation was the reason Section 230 was passed in the first place. From its very inception, Section 230 was about allowing a middle ground for moderation.[0]
> One of the first legal challenges to Section 230 was the 1997 case Zeran v. America Online, Inc., in which a Federal court affirmed that the purpose of Section 230 as passed by Congress was "to remove the disincentives to self-regulation created by the Stratton Oakmont decision". Under that court's holding, computer service providers who regulated the dissemination of offensive material on their services risked subjecting themselves to liability, because such regulation cast the service provider in the role of a publisher.
Thanks, I didn't know about those cases. This is one of my favorite topics in tech and I learned something interesting from our discussion.
As I've pointed out to other commenters in this thread, I still think your analysis makes too many assumptions based on the present-day legal environment of the web. You have to agree that, because of the bill's broad scope (granting ALL internet service companies immunity from legal action) and its timing (the early days of the web's popularization), we don't really know what the legal environment for web businesses would look like without Section 230. This legislation came so early and changed everything so drastically that we don't know whether the courts would have found a middle ground allowing for some moderation, or whether people would have found more efficient ways to moderate content over the years. Section 230 essentially froze the process in time by handing all legal power to the internet industry.
Arguments I've read about why Section 230 is good for the internet tend to rest on statements about how the internet works today - specifically, the way today's internet service companies run the web's most popular sites - but not a single one of these companies existed before the CDA was passed. For all we know, without the CDA, the internet would still be CompuServe, AOL, and Prodigy. Or perhaps other business models would have been invented. I think it's a mistake to assume that the current internet is the best possible internet when we haven't really seen any other.
That's fair -- I will grant you that there's a lot of uncertainty about what would happen now. I don't think it's completely blind; I lean towards "there are predictable negative effects", but we don't really know. And it's totally reasonable for someone to be less certain than me.
My response, though, is still that uncertainty is not a great position to be in when passing laws. I would point to SESTA/FOSTA as an example of legislation in the same rough category that looked like it should make sense, then got passed and had a lot of side effects that turned out to be really bad for everyone. If SESTA/FOSTA had gone wonderfully after passing, I might be more open to other conversations about adding additional liability.
[0]: https://en.wikipedia.org/wiki/Section_230#History