
Section 230 is a great thing and the consequences of repealing it or adding extra requirements to qualify for its protection would be unequivocally bad.

From a pure logic standpoint, there is something that bugs me with the argument that moderation isn't the same as producing content. A website's moderation policy could be that, on any given day, any message other than "The archduke is a corrupt autocrat responsible for the assassination of members of the political opposition" -- which might be libel -- will be deleted by the content moderation team. The next day, the moderation policy might be that the only message that will not be deleted by the moderation team should be "Furthermore, the archduke is having a secret affair with his first cousin".

The platform doesn't publish any content in this scenario; it merely waits for some random anonymous account, against which no liability can be assessed, to post the unique string of characters that is allowed by the content moderation policy on that day.
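The thought experiment above can be sketched as a trivial filter. This is purely a hypothetical illustration; the "allowed" string and function names are made up for the example:

```python
# A "moderation policy" that deletes every message except one
# predetermined string. The platform itself posts nothing; it only
# waits for some account to submit the exact allowed message.

ALLOWED_TODAY = "The archduke is a corrupt autocrat"  # placeholder string

def moderate(messages):
    """Return only the messages that survive today's moderation policy."""
    return [m for m in messages if m == ALLOWED_TODAY]

posts = ["Nice weather today", "The archduke is a corrupt autocrat"]
print(moderate(posts))  # only the one permitted message survives
```

Mechanically this is indistinguishable from any other content filter, which is exactly the point: nothing in the code distinguishes "moderation" from "publication by proxy".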

Of course, what I'm setting up here is a bit of a beard fallacy [1]. Laws are not enforced by algorithms, and a human (in particular a judge) is perfectly capable of distinguishing between a content moderation policy that bans profanity and one that bans all but one string of characters.

Beard arguments are everywhere, human-made categories always have fuzzy boundaries. However, some categories have sharper boundaries than others. There is very little ambiguity as to what "running a red light" means for a motor vehicle, even though Buridan's paradox [2] tells us that there could be. There may be a continuous curve, but the slope is very steep, creating a sharp distinction between "running a red light" and "not running a red light".

The slope from moderation policy to content publisher is not as steep. While everyone might agree that banning, say, profanity is not really a form of speech, and while everyone might agree that my cute hack above really is just speech, there are many intermediate points where reasonable people might disagree.

Generally these uncharted areas get progressively cleared up through lawsuits that create precedent, which reduces the uncertainty but introduces layers and layers of complexity.

My own feeling is that when the delineation of a category requires an accumulation of special cases, exceptions, distinctions, and clarifications, it generally means that it does not map to something fundamental or important. A large part of software engineering consists in finding the right abstractions to think about a problem, and when one leaks that badly, it's generally an indication that one is thinking about the problem wrong. My hunch is that the fundamental issue lies in the concept of liability for libel in the first place.

[1] https://en.wikipedia.org/wiki/Sorites_paradox [2] https://en.wikipedia.org/wiki/Buridan%27s_ass#Application_to...



>A large part of software engineering consists in finding the right abstractions to think about a problem, and when one leaks that badly, it's generally an indication that one is thinking about the problem wrong.

Any law about content moderation is ripe for being gamed, so I don't think a hard definition can ever suffice. Think of libel: very hard to prove because it requires clear falsehoods, intent, and expression. So plenty of people, especially politicians, have learned to walk the line. They can legally convince people that their opponents are space aliens who stomp on apple pies and interrupt baseball games.

A better example for this "adaptive" kind of law is online gaming. If a player uses an exploit to gain an advantage, the developer will often correct that exploit. When developers stop pushing bugfixes, the game often ends up being dominated by whatever exploits were left. For law, legal precedent is like a bugfix.


I don't necessarily disagree with anything you said. To continue your analogy about games: if your game requires constantly coming up with new rules and the rule book ends up being thousands of pages long, it's a poorly designed game. How many bug fixes have been made to chess? (The answer is not zero, but not many either.)

The point is that a model which requires constant bug fixes is one that generalizes poorly, which is a symptom that you are capturing the wrong abstraction.



