1. Platforms with no moderation (8Chan -- except probably even worse, because even 8Chan moderates some content)
2. Publishers that pre-vet all posted content (the NYT with no comment section)
3. Platforms that retroactively moderate content only after it's been posted, in whatever way they see fit (Twitter, Facebook, Twitch, Youtube, Reddit, Hackernews, and every public forum, IRC channel, and bug tracker ever built)
Revoking section 230 just gets rid of option 3. It's not magic, it just means that we have one less moderation strategy. And option 3 is my favorite.
Option 2 takes voices away from the powerless and would be a major step backwards for freedom of expression. It would entrench powerful, traditional media companies and allow them greater control over public narratives and public conversations. Option 1 effectively forces anyone who doesn't want to live on 8Chan off of the Internet. Moderation is a requirement for any online community to remain stable and healthy.
Even taking the premise that Twitter is an existential threat to democracy (which I am at least mildly skeptical of), it's still mind-boggling to me that people are debating how to regulate giant Internet companies instead of implementing the sensible fix, which is just to break those companies up and increase competition. All of the "they control the media and shape public opinion" arguments people are making about Facebook/Twitter boil down to the fact that ~5 companies have become so large that getting kicked off of their services can be at least somewhat reasonably argued to have an effect on speech. None of this would be a problem if the companies weren't big enough to control so much of the discourse.
So we could get rid of section 230 and implement a complicated solution that will have negative knock-on effects and unintended consequences for the entire Internet. Or, we could enforce and expand the antitrust laws that are already on the books and break up 5 companies, with almost no risk to the rest of the Internet.
What problem does revoking section 230 solve that antitrust law doesn't?
I would generally agree with everything you said here except that Option 1 is really just Option 3 except the "way they see fit" is very minimal. Moderation still exists on those "unmoderated" sites. No right-minded person supports completely unmoderated content like no right-minded person supports completely unregulated free speech. Child porn is the most obvious example of an exception to both. We can all agree that we don't want to see that and don't want to host it on our platforms. Once you accept that, it basically becomes a question of negotiating where that line is. It is reminiscent of that old inappropriate Churchill joke about haggling over price [1].
> Child porn is the most obvious example of an exception to both. We can all agree that we don't want to see that and don't want to host it on our platforms.
I wouldn't be surprised if some site like 8chan was happy to completely remove moderation/filtering if the federal penalties for unknowingly distributing CP were removed.
Basically, "look at 8chan! Even they restrict CP, so everyone supports moderation!" doesn't actually follow, since 8chan is legally required to restrict CP under federal law, unknowing or not.
I think a difference of kind, and not just degree, can be established between moderating only illegal content and moderating beyond it.
But maybe not, given that there are a lot of different interpretations of what is illegal and judgment calls have to be made over that, as well as issues of jurisdiction and even issues involving laws that may be unconstitutional.
Yes. The problem is that, once the ability exists to nuke whatever you consider to be illegal, someone else will use it to nuke stuff that you like.
If every country could impose its standards on the Internet, there'd be no Internet. It's only worked so far because the US has strong free speech rights, and has dominated.
And at the same time my ISP cares less and censors less here in Eastern Europe... :P You will most likely not get a letter for downloading movies, games, and so forth.
The problem with just limiting it to illegal content is: who gets to decide what is illegal? Websites don't have jurisdictions in the classical sense. Should websites follow German law and ban Nazi imagery? Should they follow Polish law and ban blasphemy? Should they follow Russian law and ban homosexual imagery? Should they follow Chinese law and ban support for an independent Hong Kong?
I think you're missing an option that falls into a similar category to option 1.
The platform only removes illegal content, but moderation is not done by the provider at all. Instead users moderate the content themselves.
Think of how search engines can apply safe-search filters going from safe, to moderate, to off. So let users mark content with tags/categories, or some kind of rating. Then users can hide posts that have or don't have certain content tags, or that have a rating below x, etc.
Also, you have platforms like Discord that let users create areas dedicated to a topic and self-moderate.
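Something like the following minimal sketch is what I have in mind (the names, the rating scale, and the tag scheme are purely illustrative assumptions, not any real platform's API): the platform hosts everything that's legal, posts carry tags and a coarse rating, and each user sets their own thresholds.

```python
# Minimal sketch of user-controlled tag/rating filtering.
# All names and the rating scale are illustrative assumptions, not a real API.
from dataclasses import dataclass, field

@dataclass
class Post:
    author: str
    text: str
    tags: set[str] = field(default_factory=set)
    rating: int = 0  # e.g. 0 = safe, 1 = moderate, 2 = explicit

@dataclass
class FilterPrefs:
    max_rating: int = 1                                  # hide anything rated above this
    blocked_tags: set[str] = field(default_factory=set)  # hide anything carrying these tags

def visible(post: Post, prefs: FilterPrefs) -> bool:
    """The platform hosts everything legal; each user decides what they see."""
    if post.rating > prefs.max_rating:
        return False
    return not (post.tags & prefs.blocked_tags)

posts = [
    Post("alice", "weekend hike photos", {"outdoors"}, rating=0),
    Post("bob", "graphic crime-scene footage", {"gore"}, rating=2),
]
prefs = FilterPrefs(max_rating=1, blocked_tags={"gore"})
print([p.text for p in posts if visible(p, prefs)])  # -> ['weekend hike photos']
```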
I like putting users in control of moderation and giving communities the power to self-determine what content they see. This is one of the premises behind the fediverse. However, I think that's just option 3.
If we accept that users have a right to filter content, then users should also have the right to use automated tools to filter content. If users make a block-list, they should have the right to share that block-list with other people. By extension, they should also have the right to delegate that filtering to another entity, like a forum moderator. They should be able to pay that entity money to maintain a block list for them.
And jumping off of the Discord example, if users have the right to collectively ban people from a community, they should also have the right to automate that process, to share ban-lists, and to grant moderators the ability to ban people on their behalf.
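To make that concrete, here's a minimal sketch (illustrative names only, not any real platform's API) of what "sharing a block-list" and "delegating filtering to a moderator" amount to: nothing more than taking the union of the lists a user has chosen to trust.

```python
# Sketch: a user's effective block-list is their own list merged with lists
# published by delegates they subscribe to (a forum mod, a paid curator).
# Every name here is hypothetical.

def effective_blocklist(own: set[str], subscribed: list[set[str]]) -> set[str]:
    """Union of the user's own blocks and every list they chose to follow."""
    merged = set(own)
    for shared in subscribed:
        merged |= shared
    return merged

own_blocks = {"spammer42"}
moderator_list = {"troll_account", "scam_bot"}  # maintained by a community mod
paid_curator_list = {"harassment_ring_01"}      # maintained by a paid service

blocked = effective_blocklist(own_blocks, [moderator_list, paid_curator_list])
print("troll_account" in blocked)  # True: delegation is just automated filtering
```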
I don't see a _fundamental_ difference between Reddit banning someone and a single subreddit moderator banning someone. It's just a question of scale.
To circle that back to the antitrust question: I regularly see comments about revoking section 230 that say, "well, this would only apply to large companies." It seems that in general, most people are fine with small communities moderating themselves and banning bad actors, even if the criteria are arbitrary. It only becomes a problem for them when Facebook does it, because Facebook is big.
We could go to Facebook and say, "you're too big, so you can't moderate on behalf of your users." Or, we could go to Facebook and say, "you're too big, we're breaking you up and making smaller communities." It seems to me that the second option is a lot simpler.
It's not the same as option 3 because nothing is being removed from the site without a specific legal requirement. There are no "bans", and any user can see all content posted by other users if they so choose; the filtering is merely a suggestion. Option 3 involves the site removing unwanted content so that no one can see it, not just hiding it by default.
As much as I personally like this option, however, I still couldn't agree with making it mandatory. Sites should be able to choose which user-contributed content they will or will not host, without being deemed liable for contributed content merely because they haven't chosen to censor it. (Frankly the idea that anyone would be subjected to legal reprisal for the content of a post—whatever it may contain—is an affront to freedom of speech and an unjust, disproportionate punishment for a victimless "crime".)
The legal answer is that we have narrow exceptions to freedom of association around businesses (and a few other places) refusing service and closing doors to very specific protected classifications. Arguably they should be a little less narrow, but that's straying into a values answer. Of the attributes you list, race and gender are pretty strong protected categories. Age to a lesser degree. Marxism is a political category, there is no law that says Twitter couldn't ban someone for being Marxist.
These are very narrow categories, and for the most part anything outside of them is fair game.
---
From a values perspective, I think our current system is pretty good.
We don't restrict the ability to ban users except in very specific cases for very specific protected categories. Those categories are narrowly defined, based on strong needs that communities have for protection. We can expand those protections of course, but the fact that we have very narrow exceptions to users' Right to Filter[0] in narrow situations does not mean that we should ditch the entire thing. Just because your content is technically legal does not mean you have a right to force me to consume it.
In other words, I'm quite sympathetic to the argument that our protected categories should be broader in some instances. I'm not sympathetic to the argument that because protected categories exist, a forum shouldn't be able to ban Republicans or Democrats. If you want to take away a community's Right to Filter on an attribute, I believe you should be required to demonstrate an extremely clear, compelling need to protect that specific attribute.
And in particular, I don't see any reason at all to expand those protections to political categories.
----
While we're on the subject of laws, it's always important to remind people that in the US, hate speech is protected speech -- and that's probably not going to change any time in the near future. The Supreme Court has been remarkably consistent on this point for a pretty long time.
This has implications for what revoking section 230 means.
Before section 230, courts ruled that platforms that had no knowledge of the content on their site -- that were not moderating it in any way -- couldn't be constitutionally held liable for speech on their platform[1]. Absent 230, if Facebook wants to call itself a platform, it needs to fall back on that ruling. So if they see someone harassing you for being Jewish, or female, or Marxist, or older than 40, Facebook won't be able to ban that person. Engaging in any moderation will make them liable for all of the content posted on their site. And no corporate platform is going to risk legal liability for all of its content just to ban Nazis.
Given that reality, I don't think any progressive should be arguing for the removal of section 230. Using a legal standard for moderation will make the Internet way more toxic for underrepresented groups than it already is today.
I have heard people (usually non-progressives) make the argument that this is fine, because if no one is banned from any community, everyone can participate. In other words, let the trolls run wild. I don't have much respect for that argument. If you allow every open forum to become toxic, only toxic people will be able to stomach them. Safe spaces foster diversity, and there is no way to build a safe space for user-generated content without moderation. If you don't believe me, then go join 8Chan and have fun, I guess.
I know a few progressives are hoping that the user-generated content will go away entirely, and that Jewish/female/Marxist/middle-aged people will just participate on traditional channels. I think this is also kind of naive. No one who's seriously looked at the history of traditional media channels would come away thinking they've been bastions of the progressive movement. Marginalized voices have been excluded from those channels again and again, and it's only by fighting, by circumvention, and by self-publishing their own stories and building their own collectives and communities that those marginalized voices have been able to make themselves heard.
Most users can't be trusted to moderate content because they will censor content they don't like, even if the content is legal and compliant with the terms of service.
I censor content I don't like on every website I visit, by running an adblocker that filters legal ads that are compliant with that website's terms. The filter list is entirely community maintained, and I can extend it as I see fit, and then share my extensions with any other user. Even though, and I want to stress this -- all of the content I'm blocking is legal.
And that's a pretty good setup. I like it a lot.
Absent narrow exceptions with extremely compelling justifications, users should be generally allowed to filter any content that they want for any reason, and they should be free to form communities around those filters.
I like this. Reddit should not be allowed to remove r/* because they don't agree with the politics of the subreddit. To enforce it, all members, individually, should be eligible to arbitrate the removal. Given the arbitration cost burden to the company, this would make them think twice.
It's worth mentioning that nearly every board on 8chan IS moderated, by the board owners. It's exactly the same as the Reddit model: paid admins only enforce the few site-wide rules, and board owners are left to moderate their boards however they see fit. That might be heavy or light moderation, but if people are upset with the moderation style, they just make another board.
Isn't the internet, as a whole, technically operating as #1 (at least in theory)?
If I want to post material that is sketchy, or even illegal, I can typically get away with posting it somewhere. It means that I have to host the content but it also means I have total control.
So in a way, revoking section 230 would inevitably break up the big sites by forcing people who are interested in posting/hosting content that others disagree with onto their own sites.
While this would have some economic impact (just like demonetization), it might also mean that the amount of content goes down, because people would have to defend themselves and their use of said content.
In a way, I'm a bit torn about this, because revoking section 230 seems to come at it from a different angle: removing the protection would allow the "free market" to respond, whereas the other option would be the government "forcing" the breakup.
Revoking 230 would disrupt big sites, but would likely further cement their monopoly. Liability for user generated content means that sites that feature user generated content need to spend a lot of resources on moderation, because the consequences of a false negative are large.
People who talk about section 230 breaking up large companies seem to be harboring the assumption that repealing it will only affect large companies. This is not the case. It will be a massive blow to any site that features user generated content. A blow that only large sites have the resources to withstand.
I think if this happens, decentralized social networks will really take off.
The "small sites" you refer to , which I'm assuming are typical forum type sites, have been dying a slow death for the past few years anyway, since FB ate their lunch.
Small sites would be absolutely devastated because the threat of being held liable for user comments will be way too large of a risk. Small networks don't have the resources to pre-moderate comments or build sophisticated automated systems of identifying high risk content. How many people would host their own site when the consequences of user-submitted illegal content being posted could mean millions in liability? Very few, if any. The result is that large players are the only ones that can survive in a market where companies are held liable for user submitted content.
Maybe if by "small" you're talking about dozens of people. But I fail to see how comparably small sites like Hacker News could continue to exist without section 230 protections. It'd probably take fundamental changes like charging users a subscription in order to pay for enough moderators to pre-moderate comments and submissions and I am unsure if that's even a viable approach.
Has anybody found a good model for scaling display filtering that respects users' priorities and sensibilities, rather than platform owners' and governments'?
One thing I keep hearing is that doing moderation that gives users an experience they like is a ton of work, is very expensive, and can be hard on the moderators' mental health if they do it a lot and the user base is large enough. (For example, some of the moderators might be watching videos of real people dying or getting raped all day long.)
In Mastodon and so on there are instance administrators, typically volunteers, but they don't scale well and probably don't deal well with large variations in people's beliefs, culture, and interests, nor with overlapping group memberships.
Meanwhile, major platforms are spending millions of dollars paying people to do moderation as a full-time job, in a heavy-handed, error-prone, arbitrary and centralized way where the only options are sometimes "delete this for everyone", "ban the user", or "allow this for everyone". (The "sophisticated" systems may also allow things like "hide this behind a sensitive content button", "hide this from users who don't claim to be over 18", and "ban this in certain countries".)
More sophisticated, nuanced, and pro-speech filtering work that doesn't aim to uphold a single worldwide standard seems like it will be even more expensive; who will do it and how will we incentivize it?
> More sophisticated, nuanced, and pro-speech filtering work that doesn't aim to uphold a single worldwide standard seems like it will be even more expensive; who will do it and how will we incentivize it?
That's certainly what I'd want. The standard federated model of shared filtering rules being applied at the node level is way too prone to generating filter bubbles. Also, people who are genuinely open-minded and interested in free discussion would never put up with it, and they're the most fun to be around. That's why I've never gotten into Mastodon.
I'd want filtering rules that participants applied locally. The model used by ad-blocking browser extensions, Pi-hole and such would arguably work. But of course, sets of rules would be far more complex, and it'd be best to hide them under a simple user interface.
Let's say that someone open-sourced the sort of moderation repo that major platforms are developing. With adjustable automated pre-filtering, and human tweaking.
So then any participant could run that, and publish their rules. Other participants, maybe mega nodes who got paid for their efforts, would consolidate all those rules into coherent sets, and then publish those. There'd be multiple such services, and they'd compete on filter type and quality.
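As a rough sketch of what I mean (the rule syntax and source URLs are invented purely for illustration), the client would apply everything locally, merging its own rules with whichever published sets it subscribes to, ad-blocker style:

```python
# Sketch of locally applied filtering: the client subscribes to published rule
# sets and applies them itself; nothing is removed upstream.
import re

# Each published rule set is just a list of regex patterns to hide.
rulesets = {
    "https://example.org/rules/spam.txt": [r"\bbuy now\b", r"crypto giveaway"],
    "https://example.org/rules/gore.txt": [r"\bgore\b"],
}

subscriptions = {"https://example.org/rules/spam.txt"}  # chosen by this user
local_rules = [r"\bpolitics\b"]                         # the user's own additions

def hidden(text: str) -> bool:
    patterns = local_rules + [p for url in subscriptions for p in rulesets[url]]
    return any(re.search(p, text, re.IGNORECASE) for p in patterns)

print(hidden("Crypto giveaway, click here!"))    # True: filtered locally
print(hidden("Long-form post about gardening"))  # False: still visible
```

The consolidation services would just be entities that maintain and publish better versions of those rule sets.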
> For example, some of the moderators might be watching videos of real people dying or getting raped all day long.
That is a problem. I know that personally. For many years, I relied on hearsay about Freenet, because I was way too paranoid to actually run it locally. But eventually, I developed the skills to run it on a remote server, with adequate anonymity. So I did.
I won't go into detail. But let's just say that I can not imagine how any decent person could moderate some of that stuff without mental damage.
So as distasteful as it might be, perhaps the system could incorporate filter rules from people who enjoy that stuff. They'd be motivated to adequately anonymize their contributions, of course. And if they succeeded, there'd be no leverage for anyone to be forced to identify them. And if they were careless, that would be great.
Once their rules had been characterized, however, they'd be reversed before integration with everyone else's. And the same methodology could be used for all other widely distasteful categories of content.
It's difficult to say anything general, given how widely standards vary. I was originally going to say that the system would guarantee anonymity. And that's really the only workable option, because otherwise you've built in a vulnerability.
There is a 4th option - treat certain surfaces of social media companies, mainly the user specific pages (Donald Trump's twitter page, PewDiePie's YT channel etc) as a platform; treat other surfaces which are generated based on algorithms and not directly by individual users (FB feed, YT watch feed, reddit homepage etc) as a publisher.
That way, social media companies won't be responsible for what their users upload. But they will be responsible for what they present to their users to optimize clicks / engagement / revenue etc.
I think a much better solution would be to revoke Section 230 for "recommended" content. "Recommendations", even algorithmically generated ones, are behind a lot of the brouhaha, and by endorsing content a platform is basically acting as a publisher of that content anyway.
> What problem does revoking section 230 solve that antitrust law doesn't?
From the point of view of AG Barr, whose objective is to weaponize the DOJ for the purpose of expanding Russian oligarch interests in the US, revoking section 230 is effective because it is laser-focused on tech companies. On the other hand, antitrust law affects all monopolists in all industries equally - and most of those monopolists are Barr's allies. He doesn't want to bite the hand that feeds him.
What role do you think anonymity plays in this discussion? Communities would self police much better if some level of anonymity was removed from members; it's less likely that “John Davidson” would post a racist rant than “xxTrump2020xx”. Removing some anonymity also comes with a huge slew of issues, but it would definitely resolve a lot of content problems if there was a real-world link to online behavior.
> Communities would self police much better if some level of anonymity was removed from members
This isn't an illogical theory, but I think Facebook's real-name policy disproves it.
We could have a deeper conversation about whether or not anonymity is more important than civility (I think it is). But I think before having that conversation I would want to be convinced that anonymity significantly impacts civility, and I'm much more skeptical of that claim today than I used to be. I tend to suspect now that it's more of a "common-sense" theory than something backed up by real-world studies.
That makes intuitive sense, but hasn't matched what I've seen in real life. I have had much more interesting and polite conversations on Freenet (which provides almost complete anonymity) than I have had on Facebook, back when I used it. I'm guessing it's because the audience is smaller and less visible. I think people posture for the invisible audience they assume are watching their every post, so larger communities tend to go south much more quickly than smaller more focused ones. You also get more group-think and mob-like behavior. As another example, smaller subreddits on reddit are much more polite and fun (in my experience) than the main subs. Whenever a subreddit I'm on gets linked to r/all, I just skip that post for my general mental stability. Youtube comments are another example, which are generally terrible in my experience and are visible by a huge potential audience.
> Communities would self police much better if some level of anonymity was removed from members; its less likely that “John Davidson” would post a racist rant than “xxTrump2020xx”.
A lot of Facebook users have no problem writing racist rants using their real names.
I do not think that the companies which have the funding and technical ability to effectively astroturf facebook/twitter et al. would be rendered even mildly impotent were there 3x as many social media companies as there are now.
It's a systemic failure in the way these companies act, not just a lack of competition.
When facebook is showing friends and family that I made a comment, they fit fairly within the scope of a neutral carrier, akin to a phone call or text message.
When facebook republishes a NY Times post or headline, on the official NYTimes facebook page, they are acting as a republisher, and so be it. Any user will associate the story with the NY Times.
But when facebook publishes my post and headline, on the official "MY Times" facebook page, they are taking my content, and publishing it, as a publisher. And they are doing so, in a manner that intentionally gives me the same appearance of gravitas as the NY Times.
When Facebook then decides to show my post instead of the NY Times post to other people in their feed, they are then curating and distributing this novel content. That is to say, they are publishing this content, as a publisher.
Your 3 options fail to capture this nuanced but important difference.
Facebook should not be required to police the content between individuals posting as individuals.
And Facebook should not be held responsible for redirecting traffic to established publishers and content providers.
But claiming that individuals, who are unable to publish without Facebook's platform, are also publishers, and therefore Facebook is not liable for their content, despite Facebook's actions to then selectively curate, distribute, and, in a word, publish their novel content, is not the spirit of 230.
The distinction about how sites like YouTube and Facebook amplify content is important. But moving away from the digital version, it would seem strange to me if Barnes and Noble became liable for having a big display for a book that defamed someone. IANAL, so maybe they would be, but as a thought experiment I lean towards them having significantly less responsibility than the author.
Facebook is not acting like Barnes and Noble displaying a book.
The closer analogy would be if, by becoming a B&N member, you were allowed to submit manuscripts to B&N. B&N then curated a small selection of those manuscripts to display in a format identical to the other books they sell, prominently in the front of their store. They then began advertising for those specific titles to bring traffic specifically to that store display.
The analysis you present is very misleading. The CDA doesn't say anything about retroactive moderation or pre-vetting, but grants legal immunity with no regard for moderation styles. So how do you get from here to there?
> So we could get rid of section 230 and implement a complicated solution that will have negative knock-on effects and unintended consequences for the entire Internet.
The antitrust route seems much more complicated to me because prosecutors would have to bring a new case to court for every major company, at taxpayers' expense.
What's complicated about repealing a 2-clause law? Is there another part to this process I'm not aware of?
> What problem does revoking section 230 solve that antitrust law doesn't?
Aside from being much simpler and less expensive to taxpayers? It would solve the problem once and for all by removing the source of the problem, instead of forcing the government to handle it one company at a time.
> The analysis you present is very misleading. The CDA doesn't say anything about retroactive moderation or pre-vetting, but grants legal immunity with no regard for moderation styles. So how do you get from here to there?
This is correct. Section 230 removes liability for user content regardless of moderation. The moderation argument is a red herring.
> What's complicated about repealing a 2-clause law? Is there another part to this process I'm not aware of?
There are tomes of case law relying on Section 230 of the CDA, and much of the internet is able to operate as it does today because of it.
Without Section 230, services that enable users to share content would be encumbered with a mountain of liability that would most likely scare off investors.
Email providers are protected by Section 230, as are messenger apps, forums, Usenet, chat in games, etc. Social media as it exists today would be risky for any entity to host that couldn't afford to vet all content before serving it to others.
Email and usenet both existed and ran just fine before Section 230.
There's good reason to think that low-volume, heavily-moderated forums like this one would have no problem without the CDA.
> Social media as it exists today would be risky for any entity to host that couldn't afford to vet all content before serving it to others.
"As it exists today" - and why should we assume that the present manifestation of social media is the best possible one? Why assume that large-scale content moderation is an unsolvable problem? It may be that the only reason present-day tech companies haven't solved it is because they don't need to.
If all their employees who are currently focused on getting people to click ads switched over to developing efficient moderation, could it be solved?
> Email and usenet both existed and ran just fine before Section 230.
Email and Usenet didn't have the eyes on them, or the billion-dollar coffers for legal teams to drain, that today's platforms do.
> There's good reason to think that low-volume, heavily-moderated forums like this one would have no problem without the CDA.
Without equivalent legislation that removes liability for hosting user content, all it would take is a cease & desist or arrest for someone to decide that the risk of hosting a free forum like HN just isn't worth the legal liability.
> "As it exists today" - and why should we assume that the present manifestation of social media is the best possible one?
Nobody suggested that the current manifestation is "the best one". You're posting on a forum where many people are able to amass small fortunes working for, and selling to, companies that regularly rely on Section 230. One only needs to look at the tomes of case law created by these companies based on Section 230 to recognize this.
> Why assume that large-scale content moderation is an unsolvable problem?
Again, nobody suggested that it is an unsolvable problem. It's a solvable problem with giant piles of money and an extremely lengthy payroll.
The problem is that leaving liability for user content unwaived has a chilling effect on those without giant piles of money and an infinitely long payroll. GeoCities, personal blogs, personal sites, hobbyist forums, and other mainstays of the nascent internet would either exist with massive censorship or not at all.
> If all their employees who are currently focused on getting people to click ads switched over to developing efficient moderation, could it be solved?
How many people would be able to bootstrap a site that allows users to upload content in any form if they needed a billion dollar payroll to hedge against the liability of being sued out of existence, or being raided and arrested in the middle of the night?
> What's complicated about repealing a 2-clause law?
There's substantial consensus among Internet scholars that this would change the entire Internet ecosystem in either negative or at least highly unpredictable ways. It's complicated because we need to consider and plan for the consequences, and because it opens up an entire new set of legal distinctions that have never been applied to the Internet and will need to be established and refined over decades.
Repealing 230 is simple in the same way that SESTA/FOSTA was simple -- there are lots of things that seem simple until you consider the details.
On the other hand, antitrust law is pretty widely established, it's something we need to be applying across the entire market (both online and offline) anyway, and breaking up companies has comparatively fewer fundamental consequences for the wider Internet.
It's a mistake to measure complexity in lines of law, in the same way that it's a mistake to measure program complexity in lines of code. The real test is, "how much of the system does this change, how invasive is it, and do we know what all of the effects will be?"
The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem. We don't take any of the profits they made off of Section 230 -- why should we have to pay to clean it up?
> It's a mistake to measure complexity in lines of law, in the same way that it's a mistake to measure program complexity in lines of code.
Removing code that's outdated is usually a step in the right direction.
And when you fix a bug, you always want to fix it at the source of the problem.
> The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem.
It's a problem for everyone who wants to start an Internet business. And by extension, it's the taxpayer's problem, because presumably they use the Internet.
Passing laws is like fixing bugs live on a production machine, because we don't get to go through a testing phase. When you're in production, you should almost always do the simplest, least invasive fix you can.
On moderation: If you treat platforms as liable for content posted, their only opportunity is to censor anything that might cause them to be liable.
In practice, this amounts to option 2 (the NYT). The NYT is not a forum. It pre-vets all of its content and runs it by a team of editors. You can't run an open forum like HN or Reddit that way. I don't like option 2, because I would argue having a place where anyone can communicate and publish information outside of locked-down, establishment media channels is really good.
If you tell platforms that they won't be liable as long as they don't moderate/censor (the "true platform" argument people bring up), then you've taken away their ability to moderate at all. That's how you end up with every open platform looking like 8Chan (option 1). I would also argue that allowing communities to filter and ban bad actors is necessary for an inclusive, open Internet.
The innovation of Section 230 was that it gave companies, forum owners, and platform maintainers permission to moderate. It created option 3. Owners didn't have to make a decision between blocking everything or nothing, because they couldn't be held liable for user content at all, regardless of their moderation strategy. That meant that they could be as aggressive (or passive) with moderation as they liked without worrying that it would make them liable for any content that they missed.
Section 230 is an attempt to deal with two facts -- first that moderation is fundamental to healthy communities, and second that when users have the ability to instantly post their own content there is no system (human or AI driven) that will ever be able to moderate perfectly.
So far from being a misleading sidenote or a jump in logic, content moderation was the reason why section 230 was passed to begin with. From its very inception, section 230 was always about allowing a middle ground for moderation.[0]
> One of the first legal challenges to Section 230 was the 1997 case Zeran v. America Online, Inc., in which a Federal court affirmed that the purpose of Section 230 as passed by Congress was "to remove the disincentives to self-regulation created by the Stratton Oakmont decision". Under that court's holding, computer service providers who regulated the dissemination of offensive material on their services risked subjecting themselves to liability, because such regulation cast the service provider in the role of a publisher.
Thanks, I didn't know about those cases. This is one of my favorite topics in tech and I learned something interesting from our discussion.
As I've pointed out to other commenters in this thread, I still think your analysis makes too many assumptions based on the present-day legal environment of the web. You have to agree with me that, because of the broad scope (granting ALL internet service companies immunity to legal action) and the timing of the bill (the early days of the popularization of the web), we don't really know what the legal environment for web businesses would be like without Section 230. This legislation came in so early and changed everything so drastically that we don't know if the courts would have found a middle ground to allow for some moderation, or if people would have found more efficient ways to moderate content over the years. Section 230 essentially froze the process in time by handing all legal power to the internet industry.
Arguments I've read about why Section 230 is good for the internet tend to rest on statements about how the internet works today - specifically, the way today's internet service companies run the web's most popular sites - but not a single one of these companies existed before the CDA was passed. For all we know, without the CDA, the internet would still be CompuServe, AOL, Prodigy. Or perhaps other business models would have been invented. I think it's a mistake to assume that the current internet is the best possible internet when we haven't really seen any other.
That's fair -- I will grant you that there's a lot of uncertainty about what would happen now. I don't think it's completely blind, I lean towards "there are predictable negative effects", but we don't really know. And it's totally reasonable for someone to be less certain than me.
My response to that though is still that uncertainty is not a great position to be in when passing laws. I would point at SESTA/FOSTA as examples of legislation in the same rough category that looks like it should make sense, and then gets passed and has a lot of side-effects that turn out to be really bad for everyone. If SESTA/FOSTA had passed and everything had gone wonderfully, I might be more open to other conversations about adding additional liability.
> The details and complications aren't the general taxpayer's problem, though. It's Google & Facebook's problem.
Actually, it's a problem for any company that wants to do that. And between multi-billion dollar established companies and smaller/starting-up companies, who exactly do you think will be most impacted?
The problem with this opinion is that you forget what powers these platforms: Money. Section 230 allows platforms to profit off illegal and harmful content with no responsibility whatsoever.
When Facebook promotes falsehoods, they profit. When Google sells links to malware and scam sites at the top of search, they profit. And even if they get around to moderating these abuses of their platforms, they keep the profit from the harm done via their platforms.
If we want platforms to have the proper incentives to moderate content properly, they need to lose money when they fail, or at least, fail to profit when they fail. Ad money for malware campaigns and scams should be confiscated.
But right now, malicious actors on ad platforms drive up the revenue on bidding-based platforms, and no matter who wins the bid, that equals money for ad companies. They have fundamentally no incentive to police content that's making them money.
> The problem with this opinion is that you forget what powers these platforms: Money. Section 230 allows platforms to profit off illegal and harmful content with no responsibility whatsoever.
That phrasing makes it misleadingly clear who's in the wrong: of course no good person would want something like that to happen.
The devil is in the details. How much does that "illegal and harmful" content help their bottom line? Is it making 50% of their income? 10%? 1% or less? After knowing the percentage value can you still make a general statement like "companies are profiting on illegal and harmful content"? Or the more correct statement would be "companies make extremely little profit on illegal and harmful content"?
On the other hand, before proposing mandatory moderation for all user content on all platforms, we have to ask more questions. How much would it cost to moderate all content in order to catch that X% that is "illegal and harmful"?
> How much does that "illegal and harmful" content help their bottom line? Is it making 50% of their income? 10%? 1% or less?
The problem is that nobody actually has that data (and they'll claim your point is invalid unless you have the data, which only they even have the potential to gather). Google and Facebook will tell you it's very low, but since they're ignoring reports of malicious apps on their platforms and restoring scammy ad campaigns after they've been reported and taken down internally, what they consider bad content is likely much smaller than reality.
Honestly, with both the amount of flagrantly malicious content I see on Google and Facebook ad platforms, and the network effects that their participation has on bidding for placement, I would suspect that these companies are nearly dependent on malicious content for profitability.
Another huge point is that a lot of legitimate advertisers are paying just to protect their brand from having scams placed above their own site in searches for their trademark. This is pretty close to an extortion racket.
I think there’s a Big Short level event on the horizon where we discover some of the top valued companies on the planet are built on a lot more of this than they’ve let on.