If I lose common carrier status because I'm blocking spam, and thus section 230 protections, am I on the hook if I fail to block a fraudulent spam comment that leads to a user being harmed?
These aren't quite the same. The paper recommends common carrier protections only for publishing that people explicitly seek out or subscribe to. If you're subscribed to spam, the solution is obvious and technically trivial (unsubscribe).
The spam problem for email is not trivial because email is, by design, a public addressing system which permits unsolicited messaging.
Common carrier treatment arises when you control a unique piece of infrastructure, or, more loosely defined, a dominant one. As a result you need to provide non-discriminatory access for competitors.
In my interpretation, this requirement would also be satisfied by Facebook allowing publishers and companies to create pages. A minimal threshold of importance that's equal for all page owners might be valid.
And the paper touches on this, mentioning that there's a valid role in moderating comments:
> There might thus be reason to leave platforms free to moderate comments (rather than just authorizing users to do that for their own pages), even if one wants to stop platforms from deleting authors’ pages or authors’ posts from their pages
FB et al. will argue that they are blocking unwanted messages based on reasonable analytics. So we will continue to see exactly the same type of feed curation as today.
The only difference (maybe) is that a platform (probably) can't just wholesale ban a particular person... but I bet that even the phone company is allowed to ban a person who is deemed to be sufficiently abusive or harassing.
The fundamental problem is that there is a vocal minority who really, really don't want to believe that they are the vocal minority who everyone else wants to ignore. There isn't some magic set of laws and regulations you can create to force the silent majority to listen to you when they think you're dumb and annoying.
The first amendment grants you the right to speak and the right for people (who wish to) to listen, but it doesn't grant you the right to an audience for your speech.
You're not applying the rule correctly. Twitter and FB are preventing, say, Trump, from posting at all, even to those who want to receive his messages, so they aren't just blocking his messages to recipients who don't want to receive them. That goes beyond simply "blocking unwanted messages".
A trusted user could vouch for a new user, and/or the user could go through various types of third party verification, such as phone number, ID verification, or other verification, and then the third party could vouch for them on that basis.
You could even have monetary vouches, where a third party vouched that the user paid a certain amount of money, solved a captcha, or solved some computational problem.
The user could be in control of what measures would be required to reach their screen. As it stands, social media essentially incorporates a combination of trusted user vouching (you can see who is a friend of a friend), and monetary vouching (advertising).
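The vouching scheme above can be sketched in code. This is a minimal, hypothetical sketch (none of these names correspond to any real platform's API): recipients declare which vouch types are sufficient to reach their screen, and a sender is admitted if any of their vouches satisfies that policy.

```python
from dataclasses import dataclass, field
from enum import Enum

class VouchType(Enum):
    TRUSTED_USER = "trusted_user"  # an existing user vouches for the sender
    PHONE = "phone"                # third party verified a phone number
    ID_CHECK = "id_check"          # third party verified a government ID
    DEPOSIT = "deposit"            # third party vouches a sum was paid
    CAPTCHA = "captcha"            # third party vouches a captcha was solved

@dataclass
class Vouch:
    voucher: str     # who is vouching (a user or a third-party service)
    kind: VouchType

@dataclass
class InboxPolicy:
    # Each recipient controls which vouch types are enough to reach them.
    accepted: set = field(default_factory=lambda: {VouchType.TRUSTED_USER})

    def admits(self, vouches: list) -> bool:
        return any(v.kind in self.accepted for v in vouches)

# Example: a recipient who accepts either a friend's vouch or an ID check.
policy = InboxPolicy(accepted={VouchType.TRUSTED_USER, VouchType.ID_CHECK})
sender_vouches = [Vouch(voucher="acme-verify", kind=VouchType.ID_CHECK)]
print(policy.admits(sender_vouches))  # True: the ID check satisfies the policy
```

The point of the sketch is that the policy lives with the recipient, not the platform; the platform only evaluates it.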
To underestimate how technical changes affect real-life usage?
I often see the opposite fallacy: That tech people assume technically optimal solutions to be socially and legally optimal, which they often are not. To transfer this to your example: Perhaps the social and economic landscape of society would be better off if the spectrum belonged to us all. Maybe we would have developed legal and social norms to handle the hub nature?
Are you thinking of the spam folder? Because the primary way that gmail blocks "spam" is by never delivering the email at all. Not to your spam folder, not anywhere.
I learned about this when I was unable to receive email from a personal friend.
Looking at the ur-example of the phone company, AT&T doesn't try to preemptively place calls that a household "might be interested in based on analytics."
The modern social media landscape, however, is dominated by "the algorithm." Youtube lives on its recommendations; Twitter wants you to keep refreshing its feed; even Hacker News tries to sort the front page by a combination of novelty and interest. In my opinion, this is a continuous, editorial judgment by the outlet, and it is very different from the idea of a "neutral public square" that exists as an objective point in space.
The fight isn't really about whether an individual person should have a right to send messages to other individual people on social media platforms; it's about whether the media platform has a responsibility to actively promote views without regard for their content. It's not about forcing the "shopping mall" to allow protesters (PruneYard, from the article), but instead it's about forcing the "shopping mall" to advertise that it's hosting protesters and include them on its maps and business lists.
(Edit to add:)
The article discusses this point somewhat in its section E, related to compelled recommendations, but I think the topic deserves much broader analysis. A "legally viewpoint-neutral Twitter" would be a Pyrrhic victory for proponents if Twitter could still restrict users' visibility to direct subscribers/replies only, denying them the public part of its platform.
Why? Because a damn ton of them are spam / scams. So yes, AT&T doesn't filter calls, and users have learned not to answer them.
I don't agree. I think that would be a huge victory. It would certainly not be pyrrhic in the sense that we would lose anything.
There is a direct line from Trump tweeting about how Pence lacked the courage to deliver them the presidency by certifying what Trump called a "fraudulent vote", to Trump's followers chanting "Hang Mike Pence" in the halls of the Capitol when they got the message.
It’s a fundamental necessity of the business that social media platforms be able to deliver enough good content that users want to join and engage while filtering out enough bad content that users don’t get fed up and leave.
So they implement all kinds of mechanisms designed to promote and emphasize “good” content and minimize “bad” content. It’s important to understand that “good” means good for business and “bad” means bad for business, and of course there are sometimes short-term vs long-term considerations to balance.
Mechanisms like search rank algorithms, retweets, comments, likes, content policies, moderation, flags, bans, etc.
If the government takes some of these mechanisms away from a social media platform they will (1) fundamentally change the nature of the social media platform — quite possibly breaking what made it a desirable place to share in the first place; (2) prevent these companies from making business decisions in their own business interests.
Imagine HN with no moderation, or your favorite narrow-interest subreddits full of spam and political posts with no one allowed to moderate them or ban the crap posters. It entirely defeats the purpose.
Is this really what we want?
Also, remember that platforms much more permissive than the big social media platforms exist that are accessible over the same internet, not to mention plain old web sites. So what exactly is the need to force, say Facebook or Twitter to have very permissive content policies?
I think what many people concerned with this actually want is for social media platforms to continue to moderate and filter bad content and promote good content, but they want to tell the platform what “good” and “bad” mean according to their own opinions rather than letting them determine that for themselves.
The 'good' here is what people 'want', or at least what they think they want, not what they 'need'. Over time, users unconsciously or consciously (influencers) realize that the system overwhelmingly favors this behavior and resort to broadcasting only content that people want, irrespective of whether it's factual, whether it ends up hurting those who consume it in the long run, or even whether they themselves believe in it.
You can verify this yourself: subscribe to a Twitter topic in your area of expertise and check the home screen (mobile app), or, if you're brave enough, check the LinkedIn home page, and you'll see the algorithm's overwhelming bias towards conformity.
You can think of "discovery feeds" as an addon - and maybe even not provided by the same company providing the social network itself.
Cherry on top? You would have to actively subscribe to those feeds, forcing them to be interesting/relevant.
Even then, with zero moderation, feeds that people subscribe to could still get overwhelmed by trolling and vitriol.
What do you think the main feed and discussion might look like if moderation was not allowed?
Likewise for any special-interest sub-reddit you might subscribe to.
I think these things will be very substantially changed, for the worse, if content moderation isn't allowed.
> Is this really what we want?
There is a pretty simple solution to this. Leave these decisions in the hands of an individual user's choice.
I am sure that there would be some kinks that need to be worked out. But these problems could be approached by allowing a user to choose different spam/block parameters or algorithms, of their own choice, and letting the market decide what users really want.
I am sure that most people would voluntarily choose certain general spam blocking settings, without the need for the platform to make every decision for the user.
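To make the "user-chosen filtering" idea concrete, here is a minimal, hypothetical sketch (the function and field names are mine, not any platform's API): the platform delivers everything, and the user composes whichever filters they want applied before content reaches their screen.

```python
from typing import Callable, Dict, List

# A filter takes a post (author, text) and returns True to let it through.
Filter = Callable[[Dict[str, str]], bool]

def no_links(post: Dict[str, str]) -> bool:
    """A simple opt-in filter: drop posts containing links."""
    return "http://" not in post["text"] and "https://" not in post["text"]

def no_blocked_authors(blocked: set) -> Filter:
    """Build a filter from the user's personal block list."""
    def f(post: Dict[str, str]) -> bool:
        return post["author"] not in blocked
    return f

def apply_user_filters(feed: List[Dict[str, str]],
                       chosen: List[Filter]) -> List[Dict[str, str]]:
    # The platform hands over the whole feed; the user's chosen filters
    # decide what actually gets displayed.
    return [p for p in feed if all(f(p) for f in chosen)]

feed = [
    {"author": "alice", "text": "hello"},
    {"author": "spammer", "text": "buy now https://spam.example"},
]
my_filters = [no_links, no_blocked_authors({"spammer"})]
print(apply_user_filters(feed, my_filters))
# [{'author': 'alice', 'text': 'hello'}]
```

Because the filters are just functions the user selects, third parties could compete on supplying better ones, which is the "let the market decide" part.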
If social media platforms are considered common carriers they will have a "duty to serve." They will need to provide their service to anyone not judged by a court to be breaking the law.
That includes spam, pornography, and many forms of online abuse and harassment. There is a strong case to be made that ISPs should be common carriers, since they don't (and should not) know what is contained in the packets they transport.
But treating Twitter or Facebook or Hacker News as common carriers makes no sense. They are not "carriers" who are neutrally transmitting data they have no control over.
That's not what they want to be, and that's not what their users want them to be. 4chan has its place, but most people don't want the whole web to look like that.
Keep in mind that the law is a blunt instrument. Claiming that small social media sites don't need to worry about laws restricting big social media sites is like claiming that honest citizens don't need to worry about laws targeting terrorists.
Is the domain name really that important, so long as the content is available?
Personally, I suspect that any law like this would result in Twitter simply removing the chronological timeline altogether.
In addition, although the paper's title says "social media", Volokh also suggests the services which banned Parler – Google and Apple's app stores, and AWS – could also be subject to common carrier regulations. For those kinds of services, recommendation algorithms and domain names are irrelevant.
While I'm at it, one of the best summaries of the whole neutrality debate is from Laura Granka, and Goldman has his own summary here:
 Granka, L. A. (2010). The Politics of Search: A Decade Retrospective. The Information Society, 26(5), 364–374. https://doi.org/10.1080/01972243.2010.511560
 Goldman, E. (2011). Revisiting Search Engine Bias (SSRN Scholarly Paper ID 1860402). Social Science Research Network. https://papers.ssrn.com/abstract=1860402
All just poking holes in the idea that social media can be a common carrier while the ISPs are not. Somehow the literal carrier avoided becoming the common carrier.
It would be illegal for Facebook to arbitrarily prevent someone from accessing Facebook, and legal for Comcast to arbitrarily prevent someone from accessing Facebook. And back to my original proposal, if Facebook and Comcast came to some sort of deal, perhaps Comcast could block unwanted user on behalf of Facebook.
There's something between hosting and recommendations, which the paper suggests common carrier status could also be applied to, and that's subscriptions. So youtube wouldn't have to recommend Alex Jones' videos, but if people subscribe to his channel, it would have to show his videos to them.
> Anybody can already find hosting somewhere.
Even if that's the case, taking away their current hosting disrupts people's speech.
"Big enough" by EU standards is where there are less than four competitors of reasonable size and reach. The US tends to tolerate a higher threshold. Amy Klobuchar's "Antitrust" book suggests 40% market share as the threshold.
On page 8:
> Why does the law preclude the companies from doing this—even when they’re not monopolies, such as landline companies might be, but are highly competitive cell phone providers?
Now, media regulation is an entirely different regulatory approach: It has more normative roots, and in some cases is even precautionary: Allowing sanctioning in the face of plausible threats. That's a very sharp sword and the reason why platforms fear it.
For completeness, the definition of "big enough" in the EU is called dominant position and was set out in the Hoffmann-La Roche case. It means being so powerful that the company can act to an appreciable extent independently of its competitors and customers.
Beyond that, making sure it's always possible to host content on the Internet would be the most reasonable way to guarantee free speech. And content filters would have competition this way, as a content filter is itself a kind of content.
Some products cannot be easily broken up without disrupting the actual product. For instance, Facebook could not be broken up since the value of the product is in large part due to having one significantly sized and unified user base (network effects). But these companies can't have it both ways - they can't claim that they operate in a competitive environment with few barriers for new competition while also claiming that splitting up their user base would destroy their unique product offering.
We also need to be wary of market share arguments, especially given that these companies largely operate in the Bay Area and reflect its values/political culture/etc. This is why we regularly see them enact censorship in lock-step. Even if several companies operate with less-than-majority market share, they can behave as a cartel. That's why we shouldn't treat Twitter and Facebook and TikTok as alternatives to each other.
A better alternative might be to simply envision and implement new regulations based on minimum user bases. If your user base is larger than X (to be defined) then you are subject to regulations. Some suggestions could be user bases larger than the [smallest or largest] state by population. A social media platform that has more influence and power than a state government seems like a reasonable target for regulation.
This idea is not new - it has been discussed under different terms, e.g. in relation to "must carry" rules for cable providers.
To simplify a bit, the argument boils down to the question of liability and responsibility for content curation. Platforms either curate content and are liable; in this scenario, they may be held accountable within media regulatory norms. Or they don't curate and provide equal access, then they aren't liable and free from media regulatory normative standards.
Note that the US' pretty exceptional take on free speech (as an untouchable right) very much complicates this simplified description.
You mentioned the UK BBC in another comment. Here are some examples of BBC censorship:
If the government-run BBC sometimes censors certain topics, I'm confused as to how that same government becomes a watchdog & enforcer ensuring that private corporations designated as "common carriers" do not censor.
In other words, what higher power over the UK government forces them not to censor? There's an inherent contradiction enforcing an "equal access" law because the government itself doesn't follow it. This has unavoidable effects on government regulation of the private corporations it oversees.
The "common carrier" designation is easy to implement when communication is point-to-point with paid subscriptions (e.g. telephones) -- instead of broadcast funded with ads or government taxes (e.g. Facebook/Twitter, BBC). There is no government in the world that allows broadcast of any topic without interference.
Public service broadcasters are not about alleviating censorship; they are primarily a means of providing a solid base level of access to information. They cannot reasonably provide access to all information.
Your remarks on oversight are a very valid concern; that's why Germany, for example, has publicly funded (not state-funded; they get a mandatory fee from citizens over which the state has no control) but state-independent public service institutions.
There's much more to the argument, but a central point here is the shift from regulating supply to regulating consumption. One can trivially argue that platforms are harmless because any censorship they implement is not total in the sense of absolute government censorship - you're free to publish your stuff elsewhere.
But today, more and more countries are seeking to ensure healthy consumption of information, and to enable this, they need to intervene in platforms' content curation.
It's a dangerous argument, of course, because this is precisely what totalitarian states are doing. But - and this is important - regulating content is something that democracies have always had to do, e.g. banning libelous content, revenge porn etc. etc.
The paper deals with the supposed distinction between point-to-point and broadcast. In brief, it's not very clear. Consider the postal service: If millions of people subscribe to a magazine, is that point-to-point communication or broadcasting? How is it different from millions of people subscribing to a youtube channel? Or following someone on twitter?
Edit: the difference comes in where there are feeds and recommendations popping up to users who haven't previously subscribed to something. That and the volume/media.
Would it be used almost exclusively by extremists and lunatics? Probably, but I still think it's worth having if only in light of all the HN stories about how some account slip-up at Google utterly ruined someone's life. Everyone deserves an email address they can never lose access to.
The issue is, the extremists and lunatics don't just want email, they want twitter, facebook and all the inherent amplification capabilities of both. Not just a 1 to 1 message, not even a 1 to many messages but full on advertising to potentially interested users the same as the cat pics/videos get.
I don't think there's been a formal announcement yet, but it was mentioned in one of their Matrix Live videos a couple of weeks ago.
Personally, I see pluses and minuses. Yay, free Matrix for everyone. Boo, your government can monitor everything you do on that server.
Countries with strong public service broadcasters (e.g. the BBC in the UK) tend to consider state-run information infrastructure a good idea, at least in a dual system including private corporations.
Countries without public service media see private market actors as perfectly sufficient.
Needless to say, the US is squarely the second type.
As just one example, it argues that social media companies are not necessarily protected from being compelled to host content they disagree with simply because it would be "compelled speech," as stated in the rejection of recent Florida legislation. There are a number of existing cases where entities were compelled to do just that because they were operating a public space, even if privately owned (a shopping mall for example). Agree or disagree, this is information I wasn't aware of (and probably a lot of readers here as well), so it's interesting information to have.
Also worth noting that the language in this paper is very approachable and rife with footnotes that often point to past legal decisions, if not precedents. Even if you don't agree with the thesis of this paper, it still makes for an eloquent and informative read of one side of the aisle.
 To clarify: Being a scholarly law paper means that (1) it is well-documented and thoroughly researched, but also (2) it may employ words that seem to have a common-sense meaning but really don't. Things such as "fair", "bias", "access", "responsible" etc. have very precise legal meanings that are not readily apparent.
> I’ll begin by asking in Part I whether it’s wise to ban viewpoint discrimination by certain kinds of social media platforms, at least as to what I call their “hosting function”—the distribution of an author’s posts to users who affirmatively seek out those posts by visiting a page or subscribing to a feed.

> I’ll turn in Part II to whether such common-carrier-like laws would be consistent with the platforms’ own First Amendment rights, discussing the leading Supreme Court compelled speech and expressive association precedents [...] And then I’ll turn in Part III to discussing what Congress may do by offering 47 U.S.C. § 230(c)(1) immunity only for platform functions for which the platform accepts common carrier status, rather than offering it (as is done now) to all platform functions.

> On balance, I’ll argue, the common-carrier model might well be constitutional, at least as to the hosting function. But I want to be careful not to oversell common-carrier treatment: As to some of the platform features that are most valuable to content creators—such as platforms’ recommending certain posts to users who aren’t already subscribed to their authors’ feeds—platforms retain the First Amendment right to choose what to include in those recommendations and what to exclude from them.
Social media effectively "privatized" the content, now you have the content within their walled garden, and only they have monopoly on content filtering and aggregation (within their realm). This allowed them to monetize the advertisements on this content.
So I believe wanting "social media as common carriers" really means going back to the state before they existed, where anybody could host content and anybody could filter content. I don't think it will happen, because then the business model is dead, and the force of capitalism will not allow it (just like there is very little public land now).
I often see arguments saying that someone who is deplatformed/demonetized on these service can just use an alternate service, but I find that to not be the case in practice. Consider that Twitter, Facebook, and YouTube have more users than virtually all nations. Their network effects are core to what the product is, which is why there aren't suitable alternatives (especially when they enact censorship in unison). Telling someone to just go use a different platform is like telling someone that they don't need their power utility, since they can just stick a windmill on their property instead.
Finally, I am greatly concerned that these large privately-controlled platforms are essentially outsourcing government-driven censorship and also violating election laws. For example, when conservatives did form their own platform on Parler, AOC called for the Apple and Google app stores to ban Parler after the Jan 6 capitol riot (https://greenwald.substack.com/p/how-silicon-valley-in-a-sho...). If a sitting member of the government pressures private organizations to censor others, it should be considered a violation of the first amendment. Leaving aside the technicalities of law, it is unethical and immoral even otherwise and completely in conflict with classically liberal values.

Actions taken by these companies to suppress certain political speech in this manner also amount to a donation to the other side. This isn't recognized as "campaign funding" but it is probably more effective than campaign funding at this point. We need to do a better job of recognizing the gifts-in-kind coming out of Silicon Valley tech companies towards political parties based on the ideas they suppress/amplify/etc.
Too many twitter warriors banging away on keyboards.
You AND SOCIETY will benefit if you check out of it and just meet your neighbors.
Not if your neighbors are staying at home consuming corporate/government approved social media messages.