> Advertisements deserve stricter regulation than general user-generated content because they tend to reach far more people.
They deserve strict regulation because the carrier is actively choosing who sees them, and because there are explicit financial incentives in play. The entire point of Section 230 is that carriers can claim to be just the messenger; the only way to make sense of absolving them of responsibility for the content is to argue that their conveyance of the content does not constitute expression.
Once you have auctions for ads, and "algorithmic feeds", that becomes a lot harder to accept.
>The entire point of Section 230 is that carriers can claim to be just the messenger
Incorrect, and it's honestly kinda fascinating how often this meme shows up. What you're describing is "common carrier" status, like an ISP (or FedEx/UPS/the post office) would have. The point of Section 230 was specifically to enable *not* being "just the messenger"; it was part of the overall Communications Decency Act, intended to aid in stopping bad content. Congress added Section 230 in direct reaction to two court cases (Cubby v. CompuServe and Stratton Oakmont v. Prodigy) which together meant service providers became liable for their users' content the moment they stopped acting as pure common carriers and tried to moderate it, since they obviously and naturally could not catch everything perfectly. The specific fear was that this left only two options: either ban all user content, which would have gutted the Internet even back then, or cease all moderation, turning everything into a total cesspit. Liability protection was precisely one of the rare genuine "think of the children!" wins, enabling a third path where everyone could do their best to moderate their platforms without becoming the publisher. Not being a common carrier is the whole point!
> Congress added Section 230 in direct reaction to two court cases (Cubby v. CompuServe and Stratton Oakmont v. Prodigy) which together meant service providers became liable for their users' content the moment they stopped acting as pure common carriers and tried to moderate it, since they obviously and naturally could not catch everything perfectly.
I know that. I spoke imprecisely; my framing is that this imperfect moderation doesn't take away their immunity, i.e. they are still treated as if they were "just the messenger" (the treatment a pure conduit would have gotten under the prior rules). I deliberately avoided the actual "common carrier" phrasing.
It doesn't change the argument. Failing to apply a content policy consistently is not, logically speaking, an act of expression; choosing to show content preferentially is.
... And so is setting a content policy. For example, if a forum explicitly for hateful people set a content policy banning statements inclusive of or supportive of the targeted group, I don't see why the admin should be held harmless (even if they never post themselves). Importantly, though, setting (and attempting to enforce) the policy expresses only the view embodied in the policy, not the view of any content it permits; under US law it would be hard to imagine a content policy itself expressing anything illegal.
But my view is that if a platform deliberately shows something it has observed and evaluated to someone who hasn't requested it (i.e. as a recommendation), then it really should be liable for that. The point of not punishing platforms for failing at moderation is to let them claim plausible ignorance of what they're showing, because they can't observe and evaluate everything.