Section 230 Explained (arstechnica.com)

If you haven't read Section 230, go do so now. It's enabled the development of the modern internet as we know it, and the meat is only 3 sentences. The rest is preamble or interactions with other laws.

> (c) Protection for "Good Samaritan" blocking and screening of offensive material

> (1) Treatment of publisher or speaker

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

> (2) Civil liability

> No provider or user of an interactive computer service shall be held liable on account of-

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or

> (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).

https://uscode.house.gov/view.xhtml?req=(title:47%20section:...


There is good reason to reform section 230. Right now courts are applying the liability so broadly that companies aren’t liable even after they are notified about illegal behaviours on their site. In a court case involving Grindr refusing to take down a profile created by someone’s ex-bf that was being used to harass him, their refusal to do so even after being contacted by lawyers was protected under section 230 and the case was thrown out. I’d be all for a modified version of section 230 that required sites to have a contact email and made them liable if they don’t address certain issues in an appropriate time period.

It’s also worth mentioning that before section 230 if you didn’t moderate you weren’t liable, so in certain senses it’s a censorship bill rather than a free speech one, since it protects removing speech. That being said, I do understand the need to moderate sites and remove some content, hence my proposal of the modified version rather than a call for its elimination entirely.


> a modified version of section 230 that required sites to have a contact email and made them liable if they don’t address certain issues in an appropriate time period.

So recreate the DMCA Takedown process but for speech? Do you think the DMCA is working well for copyright holders and users?

The abuse of this would be massive. Let's say I don't like the comments you wrote so I email the host of the forum they're on and say they're defamatory. Now the host has to decide if they are defamatory (which is often a tough call even for lawyers) and also weigh the risk that I might file a costly lawsuit anyway. Or they just delete the comment.


They always have the option of allowing everything and ignoring emails -- something the DMCA doesn't offer.

To me that problem can be dealt with between the user, their ex-bf and local law enforcement. Grindr need not be involved, and no special internet laws need apply.

That’s a case of harassment and possibly some form of identity theft.

I don’t find your proposal tenable and it would obviously be prone to abuse.


That's only true if Grindr is obligated to positively ID a US resident who provided the content.

there are a lot of reasons why grindr would not want to do that. grindr is not tinder and it's not okcupid; a BIG part of the service is providing pseudonymity to its users. attitudes about homosexuality in the us and especially internationally make it a pretty significant liability for a service to start 'outing' people. if you can be ID'ed in the US you can be ID'ed in singapore or chechnya. and even besides that there's a great degree of cultural complexity at play, and even in 'tolerant' places many men are discreet because they don't want to be outed or labeled. if i ran grindr i would've made the same call

> Right now courts are applying the liability so broadly that companies aren’t liable even after they are notified about illegal behaviours on their site.

Right. While the text of the bill doesn't remove distributor liability (only publisher/speaker), it's applied as doing that too, and as giving even actively-moderating sites only neutral-platform liability. There may be justification for this in legislative history and legal construction, so it may not be a pure judicial mistake, but from a policy perspective it's at least arguably an overcompensation that Congress should correct, a correction which would be much more modest than many of the reform/repeal Section 230 proposals but would probably hit a better point in terms of dealing with the worst problems without creating more problems than it solves.

> It’s also worth mentioning that before section 230 if you didn’t moderate you weren’t liable

That's not entirely true. If you did actively moderate, you were liable as publisher, but if you didn't actively moderate you would still likely be held liable as a distributor.

> in certain senses it’s a censorship bill

The entire Communications Decency Act was a censorship bill, and the express purpose of 230 as part of the CDA was to encourage sites to moderate content instead of taking a hands-off position.

OTOH, so long as there is liability for knowingly hosting unlawful content (distributor liability) and antitrust enforcement, I think that's a good thing and reduces the social pressure for government to push the maximum line the courts will let it get away with in terms of government content restrictions.


Grindr didn't refuse to take them down; the ex-bf kept creating new ones. That's a whole different problem. Grindr claims they were monitoring for new profiles, but that some slipped through their checks.

In that scenario, I don't know what a reasonable level of effort for Grindr to exert is. It seems infinitely unreasonable to make them liable for any failure; there is a determined person on the other end that will probably eventually find some way of adding spaces or using symbols instead of letters, or using weird UTF-8 symbols or something.
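
To make that cat-and-mouse problem concrete, here is a minimal sketch (Python, with a made-up blocklist entry; nothing here is Grindr's actual system) of the kind of normalization a service might use to catch trivially disguised reposts, and of why it only goes so far:

    import unicodedata

    # Hypothetical normalized snippet of the profile text the victim reported.
    BLOCKED = {"johndoe5550100"}

    def normalize(text: str) -> str:
        # Fold stylized/full-width characters to their base forms, drop accents,
        # lowercase, and strip everything that isn't a letter or digit.
        text = unicodedata.normalize("NFKD", text).lower()
        text = "".join(c for c in text if not unicodedata.combining(c))
        return "".join(c for c in text if c.isalnum())

    def looks_blocked(profile_text: str) -> bool:
        cleaned = normalize(profile_text)
        return any(snippet in cleaned for snippet in BLOCKED)

    # Spacing and punctuation tricks get caught...
    print(looks_blocked("J.o.h.n   D-o-e (555) 0100"))  # True
    # ...but digit-for-letter swaps or a rephrased profile sail right through.
    print(looks_blocked("Contact J0hn D0e, five five five 0100"))  # False

Every new evasion means another rule, which is exactly the open-ended effort problem described above.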

I don't see Grindr as failing there; while they probably could have done more, they seem to have made a best faith effort to stop it. The police should have intervened and filed charges against the boyfriend for stalking and harassment. Even failing that, I would have filed a civil case so I could subpoena the logs from Grindr and used them as evidence in a restraining order.

Grindr is not the appropriate party to resolve this. I don't call Ford when people drive their trucks like assholes. I don't call Glock when somebody shoots someone. If you're going to call Grindr, you might as well call their ISP and Google too, see if you can get the ISP to block Grindr or get Google to route Grindr to localhost. They're complicit in enabling this too.

> Right now courts are applying the liability so broadly that companies aren’t liable even after they are notified about illegal behaviours on their site

This, to a degree, makes sense. They haven't been notified about illegal behavior on their site, they have been notified of allegedly illegal behavior on their site. Grindr is well within their rights to say that they don't believe that the profile violates any laws. For example, it says that he attempted to file for a restraining order and was denied. So that court either found that what the ex-bf was doing wasn't illegal, or that he failed to meet the requirement of a preponderance of evidence. So he failed to convince a judge that his ex was more likely than not stalking him. Should Grindr be required to take action on a claim that is more likely false than true?

> I’d be all for a modified version of section 230 that required sites to have a contact email and made them liable if they don’t address certain issues in an appropriate time period.

That's fraught with issues. What counts as addressing the issue? Is it banning the profiles as people identify them? Is it banning the personal info from appearing in profiles? Do they have to hire a group of people to memorize all the bits of bad data, and check new profiles and profile updates for those snippets, as well as any clever encodings that a computer wouldn't recognize?

What is an appropriate time period? Is it some flat period, like a week, regardless of what changes are required? Does it vary, and if so, who decides what's a reasonable amount of time?

This is not to mention that literally none of this goes through a court, which is terrifying and exceptionally prone to abuse. Of course, it could go through a court, but we already have laws and remedies for this situation in court.

Cases like that make it seem really cut and dry, like there would never be a grey area. Even ignoring cases of outright fraud, what do you do in situations where one side feels victimized but it doesn't actually meet any legal standards? Like if person A always replies and argues with person B's tweets. When person B blocks person A, they make a new account. Person B says they feel harassed and wants to force Twitter to do something about it. Person A says that Twitter is a public forum, and that if people don't want other people to disagree, they should use a more private forum. It never goes further than that. No threats, no doxxing, no real life interactions. Person A is probably an asshole, sure, but I don't think section 230 grants you immunity from assholes. I don't think it counts as stalking or harassment either (though I could certainly be wrong, not a lawyer). Should we really allow Person B to force Twitter to do something without having a judge involved? I would really rather not give the Twitter lynchmobs yet another way to dispense their own vigilante justice.


not only that, plenty of people just use blank accounts with zero info and only exchange pics and identifying information in dms. the malicious ex could just do that with the same effect. if i'm a psycho bitch who scrawls your name/number and compromising information in hundreds of truckstop bathrooms across the tristate, are shell and conoco liable? that would be absurd and it's equally as absurd in this case.

I don't think it would be world ending to get rid of Section 230. I almost would like to see it happen, if only because it would have precisely the opposite effect expected by all the people whining about being censored. Though, I suppose you can't be censored if the channel itself is extinguished.

More practically, the technically literate would go back to the world of Usenet, mailing lists, and minimalistic forums like HN, hopefully inventing distributed reputation systems in the process. I have this vague idea for PGP web-of-trust-like signing of Usenet posts (published as hidden posts when readers +1/-1) which are then spam-scored based on the depth of the attestation chain to the reader's own trusted posters, which may have been seeded from one or more centralized databases of group maintainers, similar to the current registration system for moderated Usenet groups except you could freely choose alternative registrars.
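
A rough sketch of what that attestation-chain scoring might look like (Python; the trust graph, names, and decay factor are all invented for illustration, not a real protocol):

    # Hypothetical trust graph: poster -> set of posters they have attested (+1'd).
    ATTESTATIONS = {
        "alice": {"bob", "carol"},
        "bob": {"dave"},
        "carol": {"dave", "eve"},
    }

    def trust_score(reader_roots, author, decay=0.5):
        # Score an author by the shortest attestation chain from the reader's
        # own trusted posters; unreachable authors score 0.0 (treated as spam).
        seen = set(reader_roots)
        frontier = set(reader_roots)
        depth = 0
        while frontier:
            if author in frontier:
                return decay ** depth
            nxt = set()
            for poster in frontier:
                for attested in ATTESTATIONS.get(poster, set()):
                    if attested not in seen:
                        seen.add(attested)
                        nxt.add(attested)
            frontier = nxt
            depth += 1
        return 0.0

    # The reader seeds their roots from a registrar (or hand-picks them),
    # then filters, e.g. hiding anything that scores below 0.25.
    print(trust_score({"alice"}, "dave"))  # 0.25: two hops out from alice

The signatures are only there to keep the +1/-1 attestations from being forged; the scoring itself is plain graph traversal.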


Would forums like HN survive? I can think of a few incidents where malicious information about people made the front page then turned out to be false. Is HN prepared to defend against lawsuits about that? Is HN prepared to lose lawsuits about that?

It sounds like you're basically suggesting that making the internet useless is a good thing, because maybe something something cool will come out of the ashes and there's a chance it could be even better after a bunch of extremely hard and broad problems are solved. I don't like those odds.


HN already has moderators who do a very good job of filtering posts in a timely manner. HN's exposure to liability for libel would be rather minimal. People and companies are exposed to legal risk all the time, everywhere they go, and somehow they don't curl up into a ball and die of starvation in their basements.

Big, diverse sites like Facebook and Twitter need Section 230 because they can't effectively use human moderators to sift through the content. They have to rely on machine learning, which has false negative rates orders of magnitude higher than a human's. Yet at the same time, they're constantly trying to shape and edit and, basically, narrate the user content, as part of their monetization strategy. That's their dilemma.

Moreover, the distinction between publisher and distributor will still exist. The alternative to strong moderation is no moderation--you're just a distributor, like a Usenet node or the telephone company. But that's more difficult to monetize. (Of course, the legal landscape would be more nuanced than that--traditional libel law wouldn't demand a simple dichotomy between moderation and no moderation.)

Without Section 230 companies would have a more difficult time trading profit potential for legal liability, but it would still be done. Newspapers, write-in columns, bulletin boards, and other forums were around for centuries, all the same exposed to libel law. Even the internet was around for decades prior to Section 230.


> HN's exposure to liability for libel would be rather minimal.

I don't understand how you reached that conclusion. They could be sued over any comment that appears for any amount of time. There are definitely comments that have appeared on HN that are libelous.

Moreover even if they pre-screened every comment before it was posted with a team of lawyers who never make any mistakes, they'd STILL have to worry about defending against frivolous lawsuits. Would it even be possible to buy liability insurance for a forum in this world? It would cost a fortune.

And this is for a site that has the resources to have full time moderators. Smaller sites are even worse off.

I don't see how anyone could practically operate any forum or discussion board or comments section that allowed people to post messages in real time.

>Even the internet was around for decades prior to Section 230.

Sure and sometimes your ISP got successfully sued because someone didn't like a comment posted on a message board they hosted.


> have appeared on HN that are libelous

Not just have appeared, but which are still on display.

Sometimes the difference between libellous and a critical statement protecting the public is purely the difference of the statement being true or not.

This is not something a moderator is necessarily in a position to be able to judge but it's critically important to a community that its members can communicate true negative facts about other members.


> Sometimes the difference between libellous and a critical statement protecting the public is purely the difference of the statement being true or not.

Or whether the person making the statement knew that it was false at the time (or should have known). That won't protect the statement from being libelous, but it limits your maximum liability to actual damages that you can show (which in many cases is only going to be the lawyer fees; in the case of a widely repeated libelous statements, how do you determine what harm came from which sites?).

> This is not something a moderator is necessarily in a position to be able to judge

I would make that an even stronger statement. Moderators cannot tell whether a certain post is libelous. Even assuming that the moderator knows whether the post is factually true or not (and there are a lot of accusations where, when they come out, no one knows for sure who is telling the truth), whether it's libelous depends on whether the person in question is considered a public figure, and whether the person that posted it did enough fact checking to be deemed sufficient in attempting to prove or disprove it. The only person that can determine whether the person being defamed is a public figure, and whether the burden to verify the facts was met, is a judge with jurisdiction over the case. Anything else is just people guessing at how a judge would interpret this case, which is fraught with problems, up to and including the possibility that two judges who have jurisdiction would disagree on some facet of it.


Worse yet: sometimes the difference is just who happens to be on your jury that day!

> They could be sued over any comment that appears for any amount of time. There are definitely comments that have appeared on HN that are libelous.

1) You could be sued now for [potentially] libelous comments you write on HN. What's the average wealth of HN posters? How many times has HN had to field user account disclosure requests so commenters could be sued?

2) There are scenarios where HN could be sued now for [potentially] libelous material. For example, in the way moderators reword titles. Not sure how likely they would be to succeed, but it's certainly plausible, and it would be relatively cheap for a lawyer to test the waters. I'd be curious to see how many letters Y Combinator has had to field regarding its content. I suspect greater than 0, but still relatively few. Do its lawyers toss them in the trash, discounting to $0 the risk of liability? I doubt it--while they may consider the risk low, it's still something, and that something presumably affects HN's policies today.

A few months ago I learned a memorable phrase from an HN comment: think in probabilities, not possibilities. Regarding Section 230, most people seem to be in a mode of thinking where they simply compare a world with existentially oppressive liability vs no liability whatsoever. The world doesn't work that way, not even U.S. law. We're all subject to the possibility of financially existential liability every time we drive a car, but we're not crippled by it. How many Silicon Valley engineers with million-plus dollar homes and assets even have umbrella coverage? While I suspect the number is far fewer than what would be rationally called for, the reason is nonetheless because the probabilities are far less ominous than the possibilities.

Would HN's liability exposure grow? Absolutely. Would their legal costs, including possible settlements, increase? I would think. How would the site change? It's hard to say, but I'll go on record as saying that I don't think it'd be taken down, and I seriously doubt there would be many, if any, substantive changes to current policies and practices.

> Sure and sometimes your ISP got successfully sued because someone didn't like a comment posted on a message board they hosted.

To be clear, my only claim is that I don't think it would be the end of the internet or even social media. It might be the end of Twitter and Facebook as we know it, but the U.S. grants to participatory websites one of the, if not the strongest defenses to libel liability in the developed world, and yet the internet works much the same everywhere else lacking such a strict defense. Likewise, many people consider civil tort liability entrepreneurially oppressive in the U.S., and yet private enterprise--grocery stores, manufacturers, schools, etc--exists much the same here as it does elsewhere, especially in other developed countries. In fact, often they willfully subject themselves to more risk than they would elsewhere. (That's one benefit of a system that relies on private suits as opposed to regulatory mandates or criminal sanctions.) And yet the worst figures I've seen for the supposed comparative cost to the immensely successful U.S. economy of its overly litigious civil legal system is something like 5% of GDP.

There's a lot of hyperbole and hand-wringing surrounding this issue, and a big reason for it, IMO, relates to our contemporary, radical narratives regarding Free Speech on the one hand and American litigiousness on the other. While anxiety regarding both may be rooted in a kernel of truth, the full truth and reality--legal, political, social--doesn't support the extreme reactions and doomsday predictions.

While I'm not advocating for repeal of Section 230, I'd trade it in a heartbeat for legislative voiding of Qualified Immunity, if that sort of compromise was on the table between Democrats and Republicans. That's the sort of flexible, pragmatic thinking I wish there was more of in our public discourse. But it can't happen if we're all single-issue voters on every issue, which is what absolutist, possibility-not-probability thinking has turned us into.


We won't know the actual exposure until it ends up in the courts. Remember, we're talking about removing the good faith liability protections. Maybe a few thousand views of a libelous comment is enough, even if it was eventually removed. Either way, someone has to hire lawyers to go defend this, so it's not free.

We should also expect new bad actors to take advantage of this. As long as they can spam libel faster than moderators can delete it, they can force the site to shut down or risk the lawsuits. While I'm sure YC has its share of enemies deserved or not, even perfectly innocent people are attacked online every day for no reason at all.


There's no reason to assume that an operator would be liable for libel spam, since they lack mens rea. Libel would only be in play if they intentionally refused to take down content or tried to extort people with it.

I think the odds are pretty good. There's a lot of smart & motivated people who really like the internet, who would probably go a long way to replace it.

Why aren't those people interested in working on that today?

The plan is not to repeal Section 230. The plan is to make protection contingent on appeasing political appointees at the FTC.

Whoever controls the FTC will be able to (and will) pressure the major social media networks into acting as a propaganda arm for their political party.

As dystopian as FB and Twitter are today, in this case, the medicine is poison.

See https://www.hawley.senate.gov/senator-hawley-introduces-legi...


Hawley's plan is just one of them.

Some people, including both Joe Biden and Donald Trump, have called for a complete repeal of Section 230 at various times in the last year.


> Some people, including both Joe Biden and Donald Trump, have called for a complete repeal of Section 230 at various times in the last year

AFAICT, that characterization of Biden's position is based entirely on a single oral interview response, which quite arguably was not saying that the law should be repealed but that, on the facts of Facebook's specific conduct, and that of some unspecified other platforms, their conduct should be excluded from Section 230 protections because they were knowingly engaging in misinformation.

Note that Section 230 protections in case law are broader than what is provided on the face of the statute; in addition to the "publisher or speaker" protection in Section 230(c)(1), courts have extended it to also prevent liability as a distributor for content, IIRC by synthesizing 230(c)(1) and the good-faith blocking rule in Section 230(c)(2) and some legislative history to add the not-express-in-statute rule that sites are also not liable even as a distributor for the material they don't block, with some exceptions. Biden's statement is consistent with restricting Section 230 to what it says on its face, which would be only removing publisher/speaker liability, not distributor liability (which comes about when the distributor has knowledge or legal notice of the legal problem with the content.)


IANAL but The First Amendment, not Section 230, is what lets me say that Donald Trump is in league with reptilians to enslave all Americans who prefer pork to beef.

Facebook spreading the above, or other similarly ludicrous information, is likewise protected.


> IANAL but The First Amendment, not Section 230, is what lets me say that Donald Trump is in league with reptilians to enslave all Americans who prefer pork to beef.

The First Amendment does not protect you saying that if it is false and you have knowledge that it is false or are grossly reckless in saying it without confirming its veracity, see, New York Times v. Sullivan.

Section 230 is what prevents Facebook from sharing your liability, as a publisher, if they relay your saying that in the conditions in which you would be liable for defamation.


Have you seen the "Eat Shit Bob" episode of John Oliver? They say many untrue things about a coal executive, all harmful to his reputation. But no serious person could take them to be truthful at all. And he isn't even the President.

Section 230 repeal would just mean lawyers name Facebook as a defendant under the deepest-pockets rule. It doesn't stop Facebook from saying things about politicians.


I think that's a very generous reading of what Biden said, especially considering the followup question and the fact that he's declined to clarify his position in the intervening 8 months. Search "230" on this page to see it https://www.nytimes.com/interactive/2020/01/17/opinion/joe-b...

Anyway my point was that there have been calls to repeal 230 from across the ideological spectrum.


> I think that's a very generous reading of what Biden said, especially considering the followup question

In the followup, Biden reiterates the conduct condition and the knowing falsehood criteria, which reinforces rather than weakens the impression that he is calling for the protections of Section 230 to be inapplicable to the actor/action in question due to their knowledge, a distributor-like standard, and not for the law itself to be repealed generally.

I suppose you could read the first line of his response to the second followup ("He should be submitted to civil liability and his company to civil liability, just like you would be here at The New York Times") as calling for publisher-like liability if you ignore the explicit references to actual knowledge as the basis for nonprotection in both the original response and the first followup, but I do think that that is the more strained interpretation, not the less strained.

> and the fact that he's declined to clarify his position in the intervening 8 months.

Why would you assume that he doesn't want to clarify because he wants a full repeal? It's not as if there isn't a constituency for a full repeal, especially on the right, and a key part of Biden's strategy is holding together a Bernie Sanders-to-Bill Kristol left-right alliance against Trump. Keeping disagreements over the details of his position on the issue (which is clearly peripheral to his platform, in the grand scheme of things) out of the reasons for people to not feel comfortable with him is as plausible a motivation for that regardless of which side of the full-repeal-vs.-reform divide his preference on 230 sits on.


Do you have links to actual plans from other folks? Hawley's is the only one I can find an actual draft bill for.

Brian Schatz and John Thune have one: https://www.schatz.senate.gov/imo/media/doc/OLL20612.pdf

The DOJ has one: https://www.justice.gov/opa/pr/justice-department-unveils-pr...

The Whitehouse has a somewhat bogus EO https://www.whitehouse.gov/presidential-actions/executive-or...

There have been a bunch of attempts to rewrite and at least a few attempts to just repeal it from both Democrats and Republicans.


Thanks.

That's nonsense. Showing "their algorithms and content-removal practices are politically neutral" is not an insurmountable bar. It's just inconvenient for Big Tech's supporting interests.

Really? You think we can here in this thread all agree to what it means for an algorithm or content-removal practice to be "politically neutral"?

If so, please go ahead! But I seriously doubt it. This is a thing political philosophers argue about in journals to this day, that lawyers argue about in SCOTUS cases to this day, and that has been litigated to death in thousands of HN threads over the years.

The question of what "politically neutral" means is perhaps the MOST political question there is. The delineation of political speech from non-political speech defines the playing field.

And even setting aside genuine disagreement, politics does not operate on good faith. It operates on power. In practice, the bill does not outline specific criteria. So "politically neutral" will mean whatever the FTC wants it to mean. Which means it will mean whatever the appointees of the FTC chair want it to mean.

Josh Hawley, of course, knows and understands how power works. He would not be proposing this bill if the big tech companies were right-biased. Democrats also understand how power works. So, in this counter-factual world of right-biased social media, it would be Democrats clamoring for federal intervention and Hawley decrying the "Democrat attack on the most successful American companies". Do you really believe otherwise?


Exactly. Also, people have a stronger negative reaction to news they don't like than a positive reaction to news they agree with, and extremists think even neutral descriptions of reality are biased against them. So if you show a neutral selection to a non-neutral person, they're likely to see it as biased because it doesn't align with their perception of what the proportions should be. Nobody will ever agree on what "neutral" means, which means it will most likely mean "biased in favor of whoever currently has political power".

Exactly. Imagine a republican-backed FCC arguing that, because more voters in the US are democrat, it's not politically neutral to try to show a news article to everyone in the US and the algorithm should try to show it only to an equal number of republicans and democrats.

> You think we can here in this thread all agree to what it means for an algorithm or content-removal practice to be "politically neutral"?

Well, there's a simple answer, but I doubt we'll agree on it. It is impossible for a content-removal practice, algorithmic or otherwise, to be politically neutral. Any such practice will involve (whether implemented case-by-case or encoded into the design of the algorithm) judgements of a political nature and with political impacts.


> It is impossible for a content-removal practice, algorithmic or otherwise, to be politically neutral.

Right. My point is that Hawley's whole premise of a "politically appointed political neutrality committee" is absurdly transparent.


I'm not a republican, nor a Trump supporter, but I disagree. I think the courts would be able to create a body of case law over whether a removal was due to a post being "violent, obscene or harassing" or for some other reason.

We can't be terrified of regulating platforms that have massive amounts of control over what most people see or hear about.


> I think the courts would be able to create a body of case law over whether a removal was due to a post being "violent, obscene or harassing" or for some other reason.

1. Maybe, but that's not what Hawley's bill does.

2. Leaving inherently political questions up to the courts invites politicizing the courts -- something that's already happened and that, if it continues apace, threatens to delegitimize and gridlock the entire federal legal system.

3. Given that you're not a Trump supporter or Republican, perhaps you should review the last 20 years of federal judicial appointments before placing so much faith in the courts...

> We can't be terrified of regulating platforms that have massive amounts of control over what most people see or hear about.

Agreed. I think there are lots of reasonable approaches toward regulation and/or self-regulation. The ability of customers to choose from a marketplace of recommendation algos (or implement their own) is the obvious market-based solution.

However, I do not think a politically appointed committee whose job is to define political neutrality is a reasonable approach. And I think that leaving inherently political moderation choices up to the courts would be even worse -- at least FTC chairs aren't lifetime appointments, and at least politicizing the FTC won't deteriorate public trust in the one portion of the federal government that is not yet perceived as nakedly partisan.


Very simple. No primary moderation action should be made based on human input. Automated moderation should look for identifiable harms (i.e. illicit content, directed threats, terrorism), and absolutely nothing should be removed or blocked based on vague and nebulously defined concerns over "misinformation". Voila -- political neutrality in moderation.

> No primary moderation action should be made based on human input

1. That means no HN.

2. I normally don't have to remind people of this at places like HN, but... algorithms are written by... humans! Supervised algos use data labeled by... humans!

> Automated moderation should look for identifiable harms (i.e. illicit content, directed threats, terrorism)

Why do you list terrorism separately from directed threats?

What is the line/difference between "terrorism" and an "undirected threat"?

Are militia groups that don't make directed threats terrorists? Are radical religious groups that don't make directed threats terrorists? What if they are run by actual terrorists but none of the speech amounts to a directed threat?

Speaking of which, what is a terrorist organization? Is the KKK? What about small white nationalist or black power militia groups? What about QAnon? What about antifa? What about BLM? What about Westboro Baptist? What about the Black Panthers?

There are people -- elected officials -- who think each of those is a terror organization.

So, defining terrorist organization is absolutely a political fight. Maybe we avoid that and just talk about directed threats? Ok. Does that mean that Al Qaeda is allowed to operate on FB as long as they don't make directed threats? In fact, that FB is prohibited from not allowing Al Qaeda on as long as they don't make directed threats? That seems like not a solution anyone is going to get behind.

We haven't even gotten past the "obviously terrorism=bad" and we already have to declare whether BLM, QAnon, Westboro, or militia groups are "terrorists". Which some senators believe is the case and is a 100% political question.

> illicit content

Is Ginsberg's Howl illicit? Is a picture of two women kissing illicit? What about non-sexualized nude breasts? What about nude male bodies? What about an erect penis but in a non-erotic context? Will the dominant answers to these questions be the same in 50 years?

Lots of people would say a site that allows pictures of heterosexual kissing but not pictures of homosexual kissing is obviously taking a political position, but that was outside the realm of "political opinion" when I entered adulthood! Any public homosexual display of affection was obviously illicit.

> absolutely nothing should be removed or blocked based on vague and nebulously defined concerns over "misinformation".

What does vague mean? What does nebulously defined mean? What is the difference between misinformation and libel? What is the difference between misinformation and dangerous information? Is it impermissible to remove a video that's targeted at kids and encourages huffing glue as a fun and harm-free activity?

Anyone who has moderated a forum knows that such an algorithm is going to have all sorts of holes and perceived biases. I've never written an automod that some user doesn't get pissed off about.

More generally: that's just straight-up moderation, it has nothing to do with tweaks to recommendation algos.

What if Twitter realizes that people leave the site if they see stuff about abortion but stay if they see stuff about LGBT rights? Again, viewpoint-neutral, Americans just one day start yawning about abortion and really polarize on LGBT stuff. Can they prioritize posts about LGBT rights over posts about abortion as long as the content served up on the preferred topic is viewpoint-neutral and the only algorithmic goal is more lingering eyeballs?

If no to that, how about sports news vs. SCOTUS decision news?

If yes to that, what about COVID case counts vs. Jobs Report numbers?

Even more generally: anyone who's stayed up to date on robust machine learning knows that defining good notions of robustness -- and political neutrality is a type of robustness -- is very much an open problem. So even if we had a precise definition of political neutrality, which I don't think we do, "simply create an algorithm that has that property" is very much an open algorithmic problem.

In fact, there are even some impossibility theorems in this space. So even if we can define neutrality in a perfectly neutral way -- which we can't -- this might be like passing a constitutional amendment that demands a voting system has all of: Non-dictatorship, unrestricted domain, monotonicity, IIA, and non-imposition. You can legislatively demand "the perfect voting system", but the universe is not obliged to ensure the existence of such a thing. Same for some types of robust ML, and no one knows which side of an impossibility theorem some precise-enough-to-code notion of political neutrality might fall on.
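
(For reference, the voting-system analogy is Arrow's impossibility theorem. Writing L(A) for the set of strict rankings of the alternatives A and n for the number of voters, a rough LaTeX statement is:

    \text{For } |A| \ge 3:\quad \nexists\, F : \mathcal{L}(A)^n \to \mathcal{L}(A) \ \text{satisfying unrestricted domain, monotonicity, non-imposition, IIA, and non-dictatorship.}

You can legislate the demand for all five properties; the function still doesn't exist.)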

Which also brings up the REAL question: are tweaks to recommendation algorithms allowed? Obviously we can't ask FB/Twitter to freeze their recommendation algos -- it's their core product. So. If they notice an "obvious bias" and tweak the algorithm to correct for it, who decides whether that was a biased human intervention or a totally appropriate bug fix? Oh, right, a politically appointed FTC.

I think that "politically neutral" is impossible to formalize in code because it is a fundamental contradiction in terms. But even if it can be formalized, I suspect that any reasonable list of formal specifications might be either mathematically impossible to train a classifier to respect or else at least AGI-complete to actually implement. But if you disagree, I'm happy to clone the GitHub repo and mess around with your proposal.


>1. That means no HN.

No, it means less 230 protection for HN. Stop conflating this with destruction of the platform, it's becoming like "net neutrality". Remember when tweaking that killed the internet?

>What is the line/difference between "terrorism" and an "undirected threat"? Speaking of which, what is a terrorist organization?

The government has a clear process to designate foreign and domestic terrorist organizations. [0] Let the actual politicians engage in that political fight. Social media companies can use the result.

>What is the difference between misinformation and libel?

Actual malice? If the standard works for newspapers, why can't it work for social media companies?

>More generally: that's just straight-up moderation, it has nothing to do with tweaks to recommendation algos. [...] If they notice an "obvious bias" and tweak the algorithm to correct for it, who decides whether that was a biased human intervention or a totally appropriate bug fix?

None of this relates. Content should not be removed or suppressed based on any political preference or designation, and that includes a fig leaf of facial neutrality. Whether it's recommended to some and not others != suppression, and it's trivial to show that your systems are based on user action not partisan interest.

These aren't sticky questions at all, they're just ways to navel gaze and avoid the obvious solutions that are inconvenient to certain actors.

[0] https://www.state.gov/terrorist-designations-and-state-spons...


> No, it means less 230 protection for HN. Stop conflating this with destruction of the platform, it's becoming like "net neutrality". Remember when tweaking that killed the internet?

Really? If HN starts only moderating based on "identifiable harms (i.e. illicit content, directed threats, terrorism)" then it'll quickly become a cesspool and lose the community.

On the other hand, if they continue to apply posting guidelines, how many banned users suing HN over "politically motivated censorship" and shit like that do you think it takes for them to decide it's not worth it? Content removed because someone was an abusive jerk suddenly becomes, in plaintiff's claims, content removed because the moderators didn't like their politics. Now spend your $$$$ to defend against that claim!

You're sticking your head into the sand over what the unintended consequences of your proposals would be because you really really really want to believe it would only have the intended consequences that you like.

(Look at what you do when you bring up newspapers: newspapers have extremely limited user-generated content, because of the standards you're proposing extending. Again: there goes HN.)

The only stuff that would survive would be the stuff with big userbases, big pockets, and the ability to throw a lot of moderating power at stuff. Which all sounds to me more like traditional broadcast media - which is historically claimed to be also unfair to the same conservatives who are making the most noise about this stuff. So... good luck with that.


>>> No primary moderation action should be made based on human input

>> 1. That means no HN.

> No, it means less 230 protection for HN.

I'd be fascinated to hear what dang thinks about HN's future existence if this hypothetical law where "No primary moderation action should be made based on human input" applied to HN.

It seems impossible to (a) run a healthy forum or (b) avoid lawsuits or even jail. E.g., can you link me to a github repo that automatically catches 100% of libel? Or even 100% of child porn (or I guess actual porn as a proxy for that problem)? Removing libel and other illegal content without "primary moderation action"s that are based on "human input" is not currently possible.

(BTW: that's NOT what Hawley's bill does! It allows human moderation, you just have to keep the political appointees happy.)

>> What is the difference between misinformation and libel?

> Actual malice? If the standard works for newspapers, why can't it work for social media companies?

Because newspapers have a few journalists. Not hundreds of millions of users.

This has to be done algorithmically or it's financially reckless to allow free-form comments at all. If it's so easy to algorithmically identify libel with 100.00% accuracy, go do it!

Given that there are regularly court cases that hinge on whether some statement rose to the level of libel -- cases that even get appealed and where highly trained judges disagree -- I'm willing to bet the problem is AGI-complete. And then some.

> The government has a clear processes to designate foreign and domestic terrorist organizations. [0] Let the actual politicians engage in that political fight. Social media companies can use the result.

> Content should not be removed or suppressed based on any political preference or designation

So politicians get to define what terrorism means and companies should suck it up and implement whatever the politicians in power decide.

So, if some powerful GOP senator designates BLM a terrorist organization, and social media companies then remove all BLM content, is that not "removing or suppressing based on political preference"? What about pro-2A militias? What about QAnon?

By the way, what about "illicit content"? If some hard core right-winger takes over Twitter tomorrow, can they ban pictures of homosexuals kissing as "illicit content"?

Hawley -- whose bill doesn't even do what you suggest -- is just shifting power over content moderation decisions from companies to political appointees. That's all. It's not neutral, it is based on human input, and it's primarily just a shift in decision making power.

Dressing this up as "neutral" is obvious bullshit. Hawley wants Twitter to understand that his political party is their ultimate master when they choose which speech to amplify on their platform. This is his explicit and openly stated goal. It is about power, not neutrality.

But anyways, this argument is easy to resolve in your favor. You propose not Hawley's bill, but a hypothetical different one where human input can't be a primary consideration. So, you're claiming that a formal specification of the political neutrality of an NLP classifier exists. I've built a lot of classifiers, and I don't believe you. Show me the code.


Note that the test actually proposed (not the press release blurb) says:

> The moderation practices of a provider of interactive computer services are politically biased if the provider moderates information provided by information content providers in a manner that [...] disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint

That means that any service that chooses to do something like suppress known conspiracy theories is going to fall afoul of the proposed changes.


For that matter, a policy of restricting hate speech will currently restrict one party more than another. A policy of prohibiting disinformation that could lead to voter suppression will currently restrict one party more than another. A policy of prohibiting misinformation about the ongoing pandemic will currently restrict one party more than another.

> Showing "their algorithms and content-removal practices are politically neutral" is not an insurmountable bar.

It is when political appointees are the ones who judge if you've cleared the bar.


It seems pretty insurmountable to me. Can you go into more detail about how they'd do it? I've seen a lot of fights where one side says "putting this post up is biased against me" and another says "taking this post down is biased against me", and I'm not sure how Facebook could resolve those disputes with confidence the FCC won't say they did it wrong.

I think that's the point of this bill. They force Facebook et al. to get certified bias-free in a manner that basically makes it impossible to get that certification. So then they get the headlines saying "FCC finds Facebook is biased!!!111"

The bill would be more palatable to me if they simply dropped the immunity, without any certification process. But then demonstrating bias would require winning civil lawsuits, which requires demonstrating damage suffered by the bias and also convincing 12 members of the jury in a unanimous vote... which is unlikely to happen, I think.

(Addendum: actually, the real point of the bill may be to just say "Facebook/Twitter/Google is biased, and I'm doing something about it!" and ignore any actual chance of it making law or being reasonable. It's not like many people actually read the details of bills to understand what they do and don't say.)


> The bill would be more palatable to me if they simply dropped the immunity, without any certification process. But then demonstrating bias would require winning civil lawsuits, which requires demonstrating damage suffered by the bias and also convincing 12 members of the jury in a unanimous vote... which is unlikely to happen, I think.

No. Killing 230 entirely would allow Twitter and Facebook to be as politically biased as they want.

However, if one of their users libels you, then you could sue Facebook in addition to that user.

And if any Facebook user posts child porn, even for a short period of time, relevant parties at Facebook could face criminal charges for distribution.

You couldn't sue Facebook for being politically biased, but Facebook would be responsible for actual crimes that its users commit.

Hawley's bill says "you won't be responsible for the illegal stuff your users do (i.e., you get 230 protections), but only as long as you keep my political appointees happy."


What does "politically neutral" mean, anyway? What happens if I establish a political party whose solitary goal is to torture babies, kittens and puppies to death? Is that suddenly a "political opinion" which must be protected?

The hypocrisy of the same FCC that said that network neutrality was too much regulation to now demand "political neutrality" is outrageous.

> I don't think it would be world ending to get rid of Section 230

If you're running a start up, how would you feel knowing that if a user uploaded illegal content to your servers, you could be raided in the middle of the night and imprisoned for it?

Only those with billions of dollars to throw at moderation would be able to comply with the law. Everyone else would need to block user content by necessity, or risk having their lives ruined by malicious users.

The net result is that hosting free speech on the internet would be too risky for anyone other than giant corporations. The liability to host users' speech would be far too high for anyone else.


Definitely. If 230 gets repealed, and somebody who had a forum wants to keep running it, they might consult a lawyer for advice. And that lawyer would say, "Don't allow user-created content. It's not worth the risk."

It only makes sense if the user content is the profit-generator and the forum owner ran the numbers and expects to still be profitable even after lawsuits.

So no more hobby forums, YouTube comments (some are good), or internet access in libraries:

>Kathleen R. v. City of Livermore, 87 Cal. App. 4th 684, 692 (2001).[136] The California Court of Appeal upheld the immunity of a city from claims of waste of public funds, nuisance, premises liability, and denial of substantive due process. The plaintiff's child downloaded pornography from a public library's computers, which did not restrict access to minors. The court found the library was not responsible for the content of the internet and explicitly found that section 230(c)(1) immunity covers governmental entities and taxpayer causes of action.

https://en.wikipedia.org/wiki/Section_230


Interesting. However, any manual action (choosing trusted posters, maintaining a database) is bad for adoption. Facebook and Twitter take care of it for their users; your system should too.

Perhaps you should use karma and comment interactions to automatically attest the people you interact with. Add a "report" button to disavow certain users. Now there is a positive and negative feedback loop to reduce the workload of attestation.

Caveat: attestation must be stabilized. The existing hierarchies of admin/(super-)moderator work well as trusted posters. On the other hand, picking and choosing your moderator(s) is interesting and will birth new flame-wars and division.

Caveat (2): Adding more crypto explodes the amount of data which must be handled. Especially when every comment and upvote is signed.
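
On caveat (2), a quick back-of-the-envelope sketch (Python; the daily vote volume and the Ed25519-sized signatures are assumptions for illustration, not measurements):

    SIG_BYTES = 64      # size of an Ed25519 signature
    PUBKEY_BYTES = 32   # signer's public key, if embedded in each record
    VOTE_PAYLOAD = 8    # post id + vote direction, roughly

    votes_per_day = 10_000_000  # hypothetical mid-sized network

    plain = votes_per_day * VOTE_PAYLOAD
    signed = votes_per_day * (VOTE_PAYLOAD + SIG_BYTES + PUBKEY_BYTES)
    print(f"{plain / 1e6:.0f} MB/day unsigned vs {signed / 1e6:.0f} MB/day signed")
    # ~80 MB vs ~1040 MB: an order of magnitude more, painful but not fatal.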


What makes Section 230 a complicated and contentious issue isn't the actual details of the law - as you say, that's quite simple - it's the consequences of such a broad, powerful, simple, thing as protecting "interactive computer services" from almost all kinds of legal action for content created by others that they keep up, regardless of what they remove, with few caveats, across a vast swathe of causes for action, business models, moderation policies, etc.

For example, suppose you're an online service Twitbook used by a vast swathe of the world to communicate, and you decide that you want to allow calls to murder politicians you dislike but not (obviously) ones you like. Section 230 gives you pretty good protection from liability over your decisions as to which political figures get threatened with murder. Probably even if one of your users gets inspired and puts a bullet in the head of someone you'd like to see dead.

Or suppose you've got a nice legalized extortion racket seeking out negative claims about people or businesses, getting them to rank highly in Google, not allowing the original posters to remove them, and demanding money from the targets to take them down. Section 230 offers pretty much ironclad protection for your business model by making it nearly impossible to get a court order forcing you to take the content down, meaning you can ensure the only way to make it go away is to pay up, and you can even literally call the fee a charge to remove libellous or defamatory content and there's not a damn thing the court system will do about it. There's a long-running website Ripoff Report that has this as their business model, and they've won every case trying to get them to remove defamatory content without paying them money for the privilege thanks to Section 230. There's also plenty of imitators going after individuals, seeking out (say) claims they've cheated on their partners and charging money to remove them - again, solidly protected by Section 230.


> For example, suppose you're an online service Twitbook used by a vast swathe of the world to communicate, and you decide that you want to allow calls to murder politicians you dislike but not (obviously) ones you like. Section 230 gives you pretty good protection from liability over your decisions as to which political figures get threatened with murder

That's not true on 2 fronts. First, Section 230 requires good faith. That would almost certainly fail the good-faith test, assuming they can demonstrate that it was done intentionally. So civilly, they would likely still be liable. In addition, Section 230 has no bearing on criminal law (it's specifically called out in subsection e). So in the event someone was killed, there would likely be a host of people from Twitter facing charges for being complicit in the death. They are effectively Charles Manson in this scenario, and I think they would have a hard time arguing that selectively filtering messages to expose users to messages encouraging them to kill someone does not count as speech.

I don't see why the second is a terrible issue. They're effectively a tabloid at that point, well known for spreading libelous content. I would be surprised if Queen Elizabeth is overly concerned that the tabloids say she's a lizard person. And also, RipoffReport is the wrong person to sue here, which is why that isn't working. If the content is libelous and you want it taken down, sue the person who wrote it, and have the judge issue a takedown order to Ripoff Report. Section 230 only protects them from civil liability, it doesn't make them immune to takedown requests.

In a funny idea, I wonder if you could upload a copyrighted image and then file a DMCA request against the page and have it delisted by Google. Their terms say you grant them a copyright, but if you upload a work that you don't own the copyright for you can't give them a copyright. Technically you're violating DMCA for the upload, and again by lying on the DMCA form you fill out (since you have to own the copyright) but as long as you pick something nobody is likely to sue you for, it should be fine (copy the credits from a book or something). Or if you want to get clever, you could have a friend make a painting of a stick figure in Paint and slap a copyright logo on it, then upload it and have your friend file the DMCA complaint. For bonus points, do it to every single page. They aren't liable because of section 230, but you could probably still force them to play a game of whack-a-mole with Google.


Why wasn't backpage.com protected by section 230?

They were, until the owners of backpage started giving advice to child traffickers about how to continue advertising.

A child trafficker would place an ad featuring an image of child sexual abuse, with wording that gave coded hints that this was a child. "Amber Alert!"

Backpage would strip out that coded language and run the ad, with the image of child sexual abuse.

Sometimes those children would, after they'd been rescued, recognise themselves in the ads and ask backpage to take the ads down. Backpage refused.

https://www.justice.gov/file/1050276/download


I think the short answer is that backpage.com possibly was protected by Section 230, but federal and state prosecutors kept on harassing them with a barrage of new lawsuits and charges until they found a court willing to say it wasn't protected, and the people running it just gave in and pled guilty in the end. With the site shut down and no end in sight they just didn't have the resources to fight it.

e(5) of Section 230 is a specific carve out that the protections don't apply to sex trafficking content. I believe they used that to argue that Backpage wasn't protected by section 230.

No, SESTA became law years after backpage was shut down

Are anonymous trolls "information content providers"?

An easy fix is to say that an "information content provider" must be a legal person who is liable for their content. Then it's easy to find where the buck stops for a Tweet or a Rip-off Report or a Revenge Porn.


Every HN profile must include a real name, verified by legal documents? I think they do that already in China.

The problem is not that Section 230 clearly requires platforms to be neutral. The problem is that it doesn’t.

Section 230 isn't in the Bill of Rights. It's a legislative gift that was given to internet companies to help them grow by granting them a special legal shield for all the highly problematic content that they host and monetize.

If they want to editorialize on the back of that content, then I don't see why they should have such special status.

We do not need to eliminate Section 230. But the definition of 'Good Samaritan' blocking and 'good faith efforts' should explicitly not include the editorial decisions of a publisher.


> It's a legislative gift that was given to internet companies to help them grow by granting them a special legal shield for all the highly problematic content that they host and monetize.

No, it was given to them to encourage them to engage in "good-faith" censorship of things that the government doesn't like. It did so by ensuring that such censorship would not move them from the relatively weak distributor-liability regime, where they were only liable if they failed to stop distributing content once they had notice of its unlawfulness, to the any-oversight-is-on-you publisher-liability regime.

It wasn't to "help them grow", which it was assumed they would do anyway, it was to encourage them to try to restrict "bad" content as they grew (it is the one surviving part of the internet censorship Communications Decency Act, and like the rest of that act was, in fact, directed at promoting internet censorship.)


Would you also advocate that we bring back the FCC's Fairness Doctrine, so that news sources would be required to present contrasting viewpoints? Arguments against that were based on the first amendment; that a free press should have complete editorial control over the content that they distribute.

I think that libel laws could be eased just a bit. As it stands, to sue a newspaper for libel you need to prove that they knew what they were reporting was false, or that they showed a "reckless disregard" for the truth. It's a very high standard, but occasionally you will see cases settle (e.g. Covington High student).

I do not think the government should be intervening on content decisions. I think:

(1) publishers & platforms should be legally responsible for content they host if they are going to editorialize on it

(2) from a 'net neutrality' standpoint, that utilities and platforms should be mostly blind and entirely blameless for the packets they carry, and

(3) we should allow some level of packet and/or content classification, somewhere between #1 and #2, without making the utility/platform fully liable for the packets/content they are carrying, if that classification is based on fairly protecting the network/platform from "attack".

The only reason I can see for a fairness doctrine would be based on a theory of anti-trust. To the extent that Twitter, Facebook, Apple, Google, etc. are monopolies, their ability to censor non-obscene viewpoints on their platforms should be limited... and that's a spectrum not a binary switch.


> that news sources would be required to present contrasting viewpoints

Personally, I don't have a problem with publishers engaging in publishing. So I wouldn't support that.

But there is a large, philosophical difference, IMO, between a publisher and a platform.

And our laws need to be changed to further clarify this.


I'd love to see the Citizens United ruling overturned. It would be great for our society to not treat corporations as people. But as it stands, corporations are found to have free speech rights. Social media companies are both publishers and platforms. Many news sites now have comments sections, too -- they're both publishers and platforms. What is the large, philosophical difference that separates them?

> But as it stands, corporations are found to have free speech rights

Sure. They have those rights, just like a newspaper has those rights.

But these companies and newspapers are also liable for their speech.

And the companies that are not liable for the speech are companies like phone companies.

Phone companies are required to follow certain restrictions, and the courts have found those restrictions to be perfectly legal.

> What is the large, philosophical difference that separates them?

Take the phone network, as an example. The courts already treat the phone network differently than they do a newspaper.


> should explicitly not include the editorial decisions of a publisher

What happens when I want to run CatTalk.com, and it's against the rules to talk about dogs, and someone comes in talking about dogs? Shouldn't you have the ability to run your site and moderate and host the content on it as you decide?


Of course you have the ability and the right to host and moderate your own content! You just do it without Section 230 protection.

That means you have a responsibility for that content; the same legal liability that newspapers and magazines have when publishing articles and editorials.


Newspapers and magazines have a staff of editors (and sometimes lawyers!) review each issue before it's published. That's the standard you want to apply to anyone running any forum anywhere on the internet? That every comment has to be carefully prescreened for legal compliance? And even then you have to operate under a substantially increased risk of lawsuit.

Maybe you prescreen every comment but you make a bad call and something defamatory gets through. Or maybe someone sues you in bad faith and you have to pay to defend it or settle.

This is a very bad future for the internet. It's an internet where the powerful, who have the full support of publishers, will get to have their voices heard loudly. And the less powerful, who do not have teams of lawyers willing to fight on their behalf, will have a very hard time getting their voices heard.


> That every comment has to be carefully prescreened for legal compliance? And even then you have to operate under a substantially increased risk of lawsuit.

No, that’s only the standard I want to apply if they are editorializing the content that’s being posted.


So, to use the same CatTalk example from before: if you're running a site dedicated to cats, and you want to disallow user posts about dogs (or cars or pornography or...), you want that site to have to carefully prescreen every comment for legal compliance and operate under a substantially increased risk of lawsuit?

So HN moderates its content, quite strictly imho. Do you think they should be sued because of a comment you or I make on this site?

In practice this means that it would be extremely irrational to run a not-for-profit or nearly not-for-profit venue that accepted posts from anything but close friends.

Is that the world you want?


Newspapers and magazines run each article past editorial review before publishing it, and usually fact check anything that looks like it might be libelous or controversial. They don't just accept what the writer submitted and go to press.

It's hard to see how you could run a site that provided near real time broadcast communication to the general public if you had to do that level of vetting of each post to make sure nothing slipped through that might get you sued.


> You just do it without Section 230 protection.

But that's the thing. Section 230 doesn't really have anything to do with moderation. In fact, it allows websites that have user-generated content to exist without moderation.

I think/hope we agree that it's completely reasonable and normal for websites to moderate the content on their sites. For example, I want Twitter to take down child pornography that gets posted. The same goes for literal hate speech - calls for genocide and violence against people - websites should have the legal ability to remove that from their sites. I do not think there should be any consequences for websites that want to remove this content.

If you remove 230 protections from websites, it forces the sites that are able to survive to moderate more, by making them legally liable for the content published.


So this might be an extreme example, but if the people running CatTalk become legally liable for the content, would they be held responsible for someone planning and carrying out a crime on their site? Because that seems like an easy law to abuse and destroy the lives of site-owners.

For example, one could post about drug deals or something in a thread that mods might not read, then hold them legally responsible for enabling drug deals on their site.


Neutral is very much in the eye of the beholder. To start considering this scenario, we'd have to ask "Who will be the arbiter of whether a site's content is 'neutral?'"

Follow the First Amendment.

The First Amendment is in conflict with the government regulating what viewpoints companies allow on their platforms.

Section 230 does that now. It gives platforms publisher immunity. They could change the law to require platforms to follow the first amendment in order to retain their publisher immunity.

They could. "Should they" is a very reasonable question.

I, for one, appreciate the current status quo where I generally don't have to deal with neo-Nazis spouting nonsense uncontested on my Facebook wall (and if that arrangement bothers me, I could go to some other site).


You can block that or contest it, as you wish. I've been called a Nazi for opposing critical race theory. Where do you draw the line? Social media companies lean left, the government should ensure they remain neutral by following the First Amendment.

> You can block that or contest it, as you wish.

I'm happy to do so. I'm also happy the platform has some base-level standards so I don't have to block and contest quite so much.

> Social media companies lean left, the government should ensure they remain neutral by following the First Amendment.

Is there anything in particular about social media, as a technology, that causes all companies engaging in it to "lean left?" If not, this is a problem the market can solve.


But the first amendment prevents the government from telling me or my company that I have to follow the first amendment like they do.

Precisely. Enforcing "neutrality" on private parties would be unconstitutional viewpoint discrimination.

Seems like a pretty easy call if the site in question blocks people's accounts for posting an article from the 4th largest print news publication in the USA.

It was an article that blatantly violated the posting medium's terms.

Doesn't matter if the government itself posted that article. Doxxing info is immoral and should be blocked.


It's only "a pretty easy call" when you cherry pick extremely easy, outlying examples.

What about more nuanced cases? Who will be arbiter of "neutral" then?


It's actually the 8th largest, not 4th largest.

If you tried to enforce a "neutrality" requirement, you'd slam face-first into the First Amendment. It would be government mandated viewpoint discrimination.

Not if you have a 6-3 majority on the court... 1A is whatever SCOTUS says it is.

> Not if you have a 6-3 majority on the court... 1A is whatever SCOTUS says it is.

That's true. Like much of the Constitution, the text is pithy and doesn't specify its definitions. The current legal interpretation of the First Amendment and free speech rests on a particular philosophical tradition that the court adopted in the last century, and especially after the 60s.

A lot of people have absolute faith in the functioning of a "marketplace of ideas," but it's not at all clear to me that such a market can work well when it's flooded by disinformation, just like a market of goods can't work well when it's flooded by counterfeits.

https://www.nytimes.com/2020/10/13/magazine/free-speech.html


Which is why the legal exemption should be tossed entirely, and new publishers like Twitter and Facebook, who have demonstrated control of what gets seen on their platforms, should be liable for the content on their platforms.

You're aware that this will result in _more_ content being moderated off the sites, right?

Full devil's advocate mode here... I think it would likely result in a severe scaling back of the operations of Facebook and Twitter (and Reddit and [...]). There are simply not enough people to hire as moderators, and no way to coordinate them at the scale these sites are running at right now, in a manner that won't get them sued on the regular.

I also think this would result in a mad dash for anonymous, distributed, decentralized communication methods, i.e. things that can't be the target of a subpoena.

Given both the toxic influence of social media on society and the severe centralization we're operating under... both of those things look very tempting.


I think it would be a net negative for the internet, attacking the very thing that let it get to this place.

That would make Hacker News liable for any defamation posted in the comments section. Well done, you just killed HN.



Throwing out all online discussion fora is rather throwing the baby out with the bathwater. The replacement would be centralized media companies that have total and absolute control over information flow.

Goodbye Mastodon (every Mastodon instance is liable for all toots). Goodbye Internet Archive (can't host content that might be defamatory; they'll be liable). Goodbye GitHub pull requests (GitHub would be liable for any defamation contained in them). And so on.


I would happily see all of those things go away to make sure Silicon Valley can't control who sees what.

It's not that I don't understand the value of those things; it's that I see far larger value in not having information in society be controlled by a few companies.


If you choke off user-generated content, the remaining content will be directly produced by Disney and other media conglomerates. That's not better, that's worse.

The only people able to blog, for instance, would be people who have the technical chops to completely self-host. Everyone else would be reduced to handing out flyers on the corner like the bad old days.

The behemoth old-media companies would be fine, because they can afford lawyers to go over everything they publish.

Not only would it not be worth it, repealing section 230 would consolidate behemoth media companies' control, not break it. It would do the absolute opposite of what you want.


> If they want to editorialize on the back of that content, then I don't see why they should have such special status.

They don't have special status for their editorializing. If Facebook or Twitter or any other interactive computer service produces editorial content that, say, libels someone, they could be sued over that and would not have a section 230 defense.


Would you consider "this is dangerous" or "this is misinformation" to be editorializing? What about "adding context"[1], as Jack Dorsey has promised to do at twitter?

Given that all are judgement calls, I'd say it's impossible for it to not be. There's a difference between merely removing something and giving your opinion on the contents.

[1]: https://twitter.com/jack/status/1317081843443912706


If they give their opinion on the content, they would be liable for that, because (1) it is not information provided by another information content provider, and so is out of scope for section 230, and (2) even if it were in scope they would be liable as the author--230 essentially says "go after the author, not the host" and in this case they would be sued as the author, not as the host.

Explicit deliberate bias is protected speech.

They don't even need to be "neutral", just to have written rules that apply to everyone. eg: no spam.

Those rules could include biases like "no news that favors Trump's reelection", but they should be in their terms of service explicitly.


I agree with this. From a libertarian standpoint, the government stepped in to help (rightfully - sometimes we need it), but the law was originally written in a way that is now being abused. I don't see a problem changing it so that if a company gets to decide what it wants to show and not show, it should be liable just like a news site.

I'm open to hearing the opposite side if anyone has any arguments on why it shouldn't change


Well they aren't news sites. They are private entities providing general platforms. Speech, per the US legal system, is more than just what one says, but also the actions one takes. These companies have not only a right, but a duty as publicly traded companies to protect their platforms as they see fit.

I fail to see how moderators on vBulletin boards in 2002 are any different than moderators/admins/algorithms on Twitter, Facebook, YouTube, etc. in 2020. The scale is different, sure, but you are not entitled to the amplification of these platforms just because they are bigger, the same way you weren't entitled to the amplification of those old vBulletin board systems.


So where do you stand on 230? Are private entities responsible for the content they host? If we repeal 230 and someone posts something falsely defaming me, who do I sue?

> So where do you stand on 230?

I generally support 230. There may be some tweaks that could be made that I would support, but the general concept behind 230 is correct.

> Are private entities responsible for the content they host?

Generally no. Private entities are responsible for the content that they publish, not for the content that they host. An example being the comment section on a news site versus the news article on a news site. The latter is content that they published, the former is content that they host.

> If we repeal 230 and someone posts something falsely defaming me, who do I sue?

IANAL, but I would imagine that if 230 is repealed and someone libels you, you sue the platform and the content creator.

If 230 isn't repealed you sue the content creator and ask for an injunction against the platform to remove the content.


So if the government is stepping in here and moving the liability from the platform to the content creator as a favor to the company, shouldn't the company in turn be barred from selectively allowing posts (assuming the content is legal and on topic)? I.e., why can't The New York Times call themselves a platform and just post whatever they want? Where would you draw the line?

> So if the government is stepping in here and moving the liability from the platform to the content creator as a favor to the company, shouldn't the company in turn be barred from selectively allowing posts (assuming the content is legal and on topic)?

That's what the law was before 230, except that it essentially applied to illegal content as well, because if the company tried to actively moderate even just illegal content, then it became liable for all content (it had to be 100% right if it tried to moderate illegal content).

Being liable only for known illegal content is distributor, not publisher, liability, which is superficially what 230 looks like it provides on its face (in practice, courts have applied 230 to provide no liability at all, not even distributor liability; expressly imposing distributor-style liability would be a modest reform).


> I.e., why can't The New York Times call themselves a platform and just post whatever they want?

Because the NYTimes pays staff to create its content and exercises complete editorial control over said content, and thus is fully liable for the content published as news articles on its site. The NYTimes, however, is not liable for the content in the comments section on said news articles, though it is allowed to remove comments that it does not wish to appear on its commenting platform.

Twitter is also liable for content that it publishes, eg: what the Twitter support account posts. But it is not liable for the content that I, dlp211, post to Twitter. Twitter still retains the right to moderate their platform as they see fit.

This applies not just to Twitter, but to every platform on the internet, from the Twitters and Facebooks and YouTubes to MyNicheVBulletinBoard to Parler, 8Chan, and 4Chan.

Now I have mentioned previously that I am open to tweaks. While I haven't thought deeply about it, I would be open to considering a tweak along the lines that the platform becomes a publisher when it promotes content via human or computer decision making. This may or may not be a good idea after I think about it more deeply and discuss it with others, but the point is that I am not of the opinion that 230 is the end all be all but I am also of the opinion that I would rather live in a world with 230 than without it. I say that as someone that believes that their politics would be greatly benefitted by the repeal of 230.


> I.e., why can't The New York Times call themselves a platform and just post whatever they want? Where would you draw the line?

IANAL, but even if they called themselves a platform they would still be acting as a publisher and the law would treat them as such. For example, they would be editing and curating content, paying writers for content, and making the material available under their own name.


Can you help me understand why selectively removing content from a privately owned website is considered "abuse"?

So if the company hosts false information against me (that a user posted), I can sue them?

I think you can sue the individual, based on my reading of Section 230, but I'm no lawyer.

I don't see how that explains why it's "abuse" for a company to selectively remove content from their website, however.


From a libertarian standpoint, the government regulating "neutrality" is a nightmare.

Should Hacker News be required to treat all links the same? If not, exactly how do you think a government "neutrality" mandate would work?

Because such a mandate means that if the HN moderators are a little biased, the whole shebang would become liable for any defamatory comments posted here. That's a government-mandated sword of Damocles hanging over every single moderation decision made here.


If you falsely slander me, doesn't 230 make it so that Hacker News can't get sued? If the government is protecting the company here, then the government should be able to also enforce rules that come with that benefit, right?

Maybe, but they can't condition a benefit on "neutrality." Conditioning makes it viewpoint discrimination, which is unconstitutional.

From a libertarian standpoint forced "neutrality" (scare quotes intended) is an infringement on the property and free speech rights of the platforms. You are not entitled to use my property against my will to endorse viewpoints that I disagree with, and the government should not force me to comply.

So in that case are you against 230? If someone falsely defames me, who do I sue?

> If someone falsely defames me, who do I sue?

You would sue the person who defamed you today.


I'm having a moment here where I am considering this from another viewpoint. While I hate the idea of siding with Ted Cruz on literally anything, I want to take a moment to really consider something.

Section 230 makes sites like Facebook, Twitter, Reddit, Youtube, Instagram and even Hacker News possible. Revoking Section 230 could expose those platforms to the possibility of liability for content posted on them. This might cause a re-shaping of the Internet in general.

Part of me seriously wonders - would that necessarily be a bad thing? I am not convinced, by any stretch of the imagination, that these social media opinion aggregation platforms are universally positive. Everyone keeps acting like the existence of Facebook somehow democratizes content publishing for the masses, even when we are faced with clear evidence that this isn't the case. The centralized nature of Facebook actually allows for larger scale manipulation of the narrative.

And how would this affect Uber, Airbnb, Amazon, Netflix and other sites? I suppose opening them up to liability for negative reviews could be a problem.

I'm thinking on the fly here, but if Facebook just disappeared off of the Internet tomorrow - I'm not really sure I would mourn that. And if new Internet companies were burdened with stricter moderation requirements (or the need to stand behind every piece of content posted onto their site), maybe that would actually be good? Maybe that would drive people to create their own websites once again.

I'm sure I haven't thought deeply enough on this but I definitely feel the tide here is a knee-jerk protection of Section 230. Yet the companies it protects the most are the ones I feel are the worst.


Remember that Section 230 isn't just about the big, centralized services. It protects everybody who's ever run a phpBB forum for their little community, or allowed comments on their blog, or set up a Mastodon instance they share with others. If you host content written by someone else, Section 230 is what allows you to moderate it, and what prevents you from being legally liable for it. Proposals requiring "political neutrality" even add new burdens to comply with; your friend ranting on your Mastodon instance about "that politician" probably isn't "neutral" publishing.

Facebook et al. could be big enough to wrangle the regulatory burden of existing without these protections. But many proposed 230 "reforms" could scare off anyone smaller, creating a regulatory moat that keeps Facebook at the top in perpetuity.


Yes, believe it or not I am questioning whether or not I should be free from liability if I host a public forum. And my conclusion is: perhaps I should be liable if I stand up a public forum.

You may suggest that would prevent me from creating a user-generated-content application of massive scale without the resources to sufficiently moderate it. And again, yes - maybe that should be a requirement of me doing such a thing.

It isn't like the only kind of business a person could create on the internet is one that surrounds the aggregation of user generated content. If it killed that entire class of business ... I am not sure we would lose very much of value that couldn't be replaced by individuals hosting their own content.


> of massive scale

Of even tiny, microscopic scale. Even your personal blog's comments, or the forum for your local club.

And it _still_ won't stop Facebook. It'll only stop _you_. That outcome doesn't sound like the outcome you're saying you may be comfortable with. It sounds like the opposite.


> That outcome doesn't sound like the outcome you're saying you may be comfortable with.

Oh no, I'm considering exactly that. I am saying: if my tiny personal blog has a comment section, I would be liable for comments posted there. If that is a burden I can't handle then I should turn off comments. At least in my experience those comment sections are a complete waste of space anyway, and the trend I've noticed from the large blogs that are still around (e.g. daringfireball, kottke) is that their comment sections are long gone.

What I am pondering is: would this ruin the Internet? If I couldn't host a public forum if it got beyond my limited means to moderate? If I couldn't have a public comment section? It doesn't seem clear to me the Internet breaks if I am forced to own the responsibility for those things that I allow to be made public through sites I control.


> What I am pondering is: would this ruin the Internet?

It would break most of how the internet works today. It would prevent most forms of real-time, one-to-many communication, since a human would first have to moderate it.

The best example is public audio/video conferencing. I watched a conference presentation in real time today, which had real-time comments in IRC and an audio/video question-and-answer session. Neither would have been possible if real-time moderation were required.

How would moderation even work for audio/video conferences? As far as I can tell it would not work, since no moderation could happen in real time and still allow for smooth audio/video conferencing. What if we act as a platform instead and claim no responsibility for the content? Then there is no ability to set topics or restrict offensive material, etc., so any random person (or bot - how would you tell the difference?) could stream in offensive videos or loud noise.

Losing most one-to-many, real-time communication would be throwing out a considerable amount of value.

Do you have solutions for issues like that?

Edit: clarifying that I am talking about one-to-many, real-time communication.


> How would moderation even work for audio/video conferences?

This is already an issue on platforms like Twitch. All Twitch streamers are required by the Twitch ToS to moderate their chats and the streamers face bans if they fail to do so.

Everyone keeps talking about "breaking" the Internet but let's consider what would actually change. Let's say that I am unable to moderate my chat because it is overrun with malicious actors. What are my options? I can completely turn off chat for one. I could restrict chat to a manageable vetted subset of chatters that I am comfortable allowing to post with minimal moderation.

In fact, as a streamer I cannot possibly read hundreds or thousands of messages per second. At that point the very idea there is "real time communication" going on is a myth anyway. Every streamer has a way of limiting this deluge of input and moderation is how they are handling it.

> Losing most one-to-many, real-time communication would be throwing out a considerable amount of value.

This is where I feel everyone here is taking things too far. You don't lose communication - you become responsible for the communication you allow to be made public.


> This is already an issue on platforms like Twitch. All Twitch streamers are required by the Twitch ToS to moderate their chats and the streamers face bans if they fail to do so.

Currently, Twitch is not held legally responsible if streamers fail to follow the ToS, thanks to Section 230. If Section 230 were dropped, however, asking the streamers to moderate would not protect Twitch from legal consequences.

I do not see how Twitch's current rules would provide them legal protection, or provide a solution for moderation of one-to-many, real-time communication, if Section 230 were dropped.

Maybe you have some assumptions about how new laws would be put in place post Section 230 that make this work?

> This is where I feel everyone here is taking things too far. You don't lose communication - you become responsible for the communication you allow to be made public.

It increases the barrier to entry so that many, if not most, forms of one-to-many, real-time communication would have to be discontinued due to the inability to moderate in real time. Hence they would be lost, along with the value they generate.


> Maybe you have some assumptions about how new laws would be put in place post Section 230 that make this work?

I don't have to have assumptions, I know of real life examples. Not long ago I worked for a company that had a 24 hour live news broadcast. How do you think they handle this?

One specific example I can recall is closed captioning. Captions are federally required on all TV broadcast channels, including live broadcasts. The company had a contract with a third party to manually transcribe the broadcast in real time. One initiative we wanted to explore was automating this process using speech recognition software. This was difficult because it turns out that incorrectly transcribed closed captions can lead to lawsuits. So the company contracted to handle the closed captions also provided insurance/indemnification against their service causing any lawsuits. None of the available AI speech recognition solutions included this insurance, so it was deemed too risky to switch.

One of the underlying assumptions in all of this Section 230 talk is "it's too hard to moderate the deluge of user generated content" .... so why even bother trying I guess? Why is that the underlying assumption? Why isn't the assumption: You can publish as much user generated content as you are capable of adequately moderating? The idea that the current free-for-all is some inherent "right" is perplexing to me.

> so that many, if not most, forms of one-to-many, real-time communication would have to be discontinued due to the inability to moderate in real time. Hence they would be lost, along with the value they generate.

Again, that doesn't seem like the necessary conclusion. It prevents centralized platforms from publishing massive deluges of unvetted content. They are not in a position to moderate the billions of videos, chat posts, images uploaded each day. So, maybe instead of leaving the free pass we've given them open we question: is it reasonable for single entities to be the sole publisher of billions of unmoderated pieces of content? That doesn't mean all content everywhere goes away. It creates limits on what sole entities can accomplish.


> I don't have to have assumptions, I know of real life examples. Not long ago I worked for a company that had a 24 hour live news broadcast. How do you think they handle this?

Your example does not address my original question in: https://news.ycombinator.com/item?id=24807064

"How would moderation even work for audio/video conferences?" "Do you have solutions for issues like that?" In your example there must be a delay in broadcasting, for the realtime captioning to happen in. That sort of delay precludes a real time back and forth conversation that is expected from audio/video conferencing.

Secondly, I was also asking for a solution that would allow small-time actors, like in my example of the conference I attended, to continue to be able to host such conferences without undue liability or massive investment.

> One of the underlying assumptions in all of this Section 230 talk is "it's too hard to moderate the deluge of user generated content" .... so why even bother trying I guess?

I have not seen that be the main thrust of any argument. To my knowledge, Section 230 was in part made to make it easier for more moderation to take place, since before it some had assumed that any level of moderation would induce legal liability, both criminal and financial, for any content that made it past moderation.

> Again, that doesn't seem like the necessary conclusion. It prevents centralized platforms from publishing massive deluges of unvetted content.

As noted above, your example does not apply to my original question - an example of a small-time actor using audio/video conferencing - so I remain unconvinced.


> Your example does not address my original question in

We'll have to agree to disagree then. For example, I am taking an online course right now. There is a chat room where people post questions and there is a moderator that elects individuals to speak. There is a private forum where people can post questions. I absolutely expect them to moderate that content to avoid any slanderous statements.

What happens now if someone calls in to a radio station's live call-in program and starts spouting nonsense? The same thing that would happen on the Internet. Let's sketch it out. I don't have money/time to do any moderation at all (or pre-vetting of on-air guests) -- well then, I'd better not do it at all, or I'd better be responsible for the consequences if a nut job gets on. I cut him off ASAP and do my best, and what happens? Does a SWAT team descend and smash down my door? No; maybe I get a cease and desist and nothing more happens, maybe someone files against me in court, maybe I have to defend myself. Maybe I pay for liability insurance (as I mentioned happens in many other similar circumstances). Maybe I am legally compelled to remove from my servers any recorded content related to the incident. Maybe I have to pay damages for the time it was up.

This idea that private real-time conferences like online classes will be impossible in such a circumstance is the hyperbole I am starting to detest. Would there be changes? Hopefully, yes. And my belief is that these changes would positively affect all online discourse.


Neither example seems to fully apply to mine: for example, you did not lay out how the online class is viable post-Section 230, and the second example does not seem to apply to small-time actors like the small, volunteer-run conference in my example (adding "volunteer-run" as a clarification). If you want to focus on one and elaborate, I am willing to follow up.

> This idea that private real-time conferences like online classes will be impossible in such a circumstance is the hyperbole

I have not used the word impossible or implied it. I have said the change would throw out "a considerable amount of value" and I have talked about value generating venues having to shut down.

I do not think you are understanding what I am saying if you pull 'impossible' from what I have said.


You said: "so many, if not most, forms of one to many real time communication would have to be discontinued"

I take your statement: "would have to be discontinued" as equivalent to impossible. If you'd like to walk back that statement we can continue to discuss.

I have given examples of: small teams of individuals (e.g. one Twitch streamer plus a mod team) broadcasting to tens of thousands of people in real time, including call-in type segments. This happens today, and I considered how it would continue if we repeal Section 230 (including comparisons to existing public real-time radio call-in segments). I have given examples of medium-sized teams presenting classes of 150+ individuals, each of whom can "raise their hand" and be selected by a moderator to take control of the stream to provide their own insights to the group in real time. I considered how this would be possible without Section 230. If neither of those forms of "one-to-many, real-time communication" fits your imagination, then please be more specific and we can continue.

However, in all of this I still have not heard to my own satisfaction a description of a realistic loss we would have if Section 230 were to be revoked.


> What I am pondering is: would this ruin the Internet? If I couldn't host a public forum if it got beyond my limited means to moderate?

Do you actually have the means to moderate your forum and fund the legal trouble you might face should you get it wrong? Are you ready for the bad actors who abuse this bottleneck to take down content they don't like? I don't think many people are positioned to handle these burdens, and I think the internet will be ruined as a result.


>Section 230 makes sites like Facebook, Twitter, Reddit, Youtube, Instagram and even Hacker News possible. Revoking Section 230 could expose those platforms to the possibility of liability for content posted on them. This might cause a re-shaping of the Internet in general.

>Part of me seriously wonders - would that necessarily be a bad thing? I am not convinced, by any stretch of the imagination, that these social media opinion aggregation platforms are universally positive.

Here's where you are wrong to think this. It doesn't just protect social media giants, Hacker News, and news websites. It protects literally everyone on the internet in the USA.

Thanks to this law you can't be held liable if you have a blog with a comment section. Anyone can post anything there, and you could be at serious risk of legal trouble if someone posted something on your website that breaks the law. Any communal website would either a) move out of the US or b) probably require some very strict controls on who can post and what.

The law doesn't protect American giants, it protects everyone that uses the internet to discuss.

I suggest you read this blog post on the issue, especially this part:

""If you said "Section 230 is a massive gift to big tech!"

Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites.""

https://www.techdirt.com/articles/20200531/23325444617/hello...


> Thanks to this law you can't be held liable if you have a blog with a comment section.

Even absent a comment section, most blogs are hosted by someone else (e.g. WordPress), who does not vet the content on that blog. That won't be possible anymore in a post-Section 230 world.


Right. So WordPress has no protections if you post bad content. So probably WordPress goes away.

No problem, you say, I'll self-host my content on AWS and use CloudFlare as a CDN. But AWS and CF no longer have Section 230 protections either, and probably won't do business with you unless you indemnify them under some kind of insurance policy (which you won't be able to afford as a small blog).

Even if you run your own server and install it at home, the lack of section 230 protections will probably make your ISP responsible for content you publish (remember: your ISP is not a common carrier - thanks FCC) so you're probably going to find that all consumer ISPs are going to have terms of service that prohibit publication, and technical implementations that enforce that.

I mean, in the non-Internet world, if I want to make a newsletter and mail it out to a subscriber list; I can at least do that. All I need is a laser printer, and a stack of stamped envelopes. The postal service is a common carrier, at least.

Today's internet is a composition of platforms, all of which are really only possible due to the existence of section 230. It blows my mind that people are so blasé about the idea of tossing it out or reworking it in a naive way.


Thank you for your final comment. As a lawyer, reading people's reactions to 230 repeal attempts (especially when they come from educated engineers and technologists) is shocking. What is more astounding is that, for some reason, they seem convinced that this will take out FB/Google/Twitter, etc., where we have ample evidence that the opposite will occur: those companies will just become further entrenched platforms, since they are the only ones with the financial capacity to continue running community-based platforms effectively. I have yet to come across a strong argument for repeal that doesn't exacerbate the issues that repeal proponents purport to want to address.

> It protects literally everyone on the internet in the USA.

I think this is the kind of knee-jerk hyperbole I want people to really think deeply about.

> Thanks to this law you can't be held liable if you have a blog with a comment section.

Maybe that should not be protected. If I am unable/unwilling to moderate the comment section of a blog I host then maybe I shouldn't have one. I do not believe a completely open free-speech comment section is a requirement of a good or successful blog. Also, there is a business opportunity for those who want comment sections to pay for moderation services.

> I must inform you that you are very, very wrong.

This kind of patronizing is neither useful nor conducive to mature discussion. Aggregate user-generated-content sites aren't some kind of holy thing, forums, comment sections or otherwise. I want people to consider the fact that we may all be fine without them at all. Nothing stops people from posting whatever content they want on their own site. It just discourages aggregation of other people's content.


>Maybe that should not be protected. If I am unable/unwilling to moderate the comment section of a blog I host then maybe I shouldn't have one. I do not believe a completely open free-speech comment section is a requirement of a good or successful blog.

It's not about having to moderate a completely free speech comment section. It's about being able to even have one and to be able to host good people and good comments without having to be liable for the time when people act terribly.

Where I'm from I don't think a bar owner can be held liable if a patron starts a fight and wounds another patron. You don't open a bar with the explicit intent of it being an amateur boxing arena, but a nice place for people to enjoy drinks and conversation.

If a user online decides to breach trust and common courtesy by posting vile stuff on my site I shouldn't be held liable for their actions as I did not force or coerce them to do it.

>Nothing stops people from posting whatever content they want on their own site. It just discourages aggregation of other people's content.

Yes, nothing is stopping them. But 230 allows you to do more with the internet than just post things on your own site. That blog post even describes an instance where you are protected against liability when replying to or forwarding an email - in which you are repeating the rule- or law-breaking thing another person said in order to reply to or comment on it.

If you dismiss these liability rules you are effectively removing everything from chatrooms to comment sections from the internet, effectively making the internet into a snailmail/bulletin board service.


> If you dismiss these liability rules you are effectively removing everything from chatrooms to comment sections from the internet, effectively making the internet into a snailmail/bulletin board service.

Yes, I am pondering exactly this. How much of the Internet do I consider valuable that would be lost? I mean, lost in the sense it would be completely irreplaceable without Section 230 protection. To be fair to my position, the vacuum of unmoderated spaces would be filled one way or another.

Maybe people are surprised by such a position, but I don't actually see enough inherent value in unmoderated comment sections on private blogs or even all of the 1990's era phpbb forums to worry about their loss. In fact, when I consider the negative effects of the massive companies hiding behind Section 230 they really seem to heavily outweigh whatever positive effect comments on my personal blog could ever bring.

It seems these things keep coming up: public comments on personal blogs and anonymous forums. I want people to think deeply about whether or not these things are really valuable, and even more so whether or not they are irreplaceable without Section 230.


> If I am unable/unwilling to moderate the comment section of a blog I host then maybe I shouldn't have one.

I completely agree, it’s clear that a successful business model around content moderation cannot exist in the internet’s current form, and there are very few consequences for people who post malicious content.


Would the forums of yesteryear survive? The early 2000s saw a veritable utopia of information sharing, as people with special interests came online and formed groups to discuss and collaborate on those interests.

Would the open source software community survive? My world revolves around github more than any other site; would ticket discussions be forced back to mailing lists? Would mail readers pick that slack up, and re-implement social media over email?


> Would the forums of yesteryear survive?

I know this is questioning some fundamental assumptions behind most of our moral principles, but I am literally questioning: should they survive? It seems everyone is assuming that they should. Some seem to suggest they should based on an abstract "free speech at all costs" philosophy. Others are assuming they have some kind of positive effect, either on personal growth or economic activity.

> Would the open source software community survive?

I'm not sure how Section 230 applies to code but I don't think public forums like Facebook, Twitter or Youtube are necessary for open source software to continue (any more than they were necessary for it to start, which happened long before they existed).

Besides, what I'm pondering is the centralized aggregation points. People could still host blogs, they are just directly liable for the content they post (as they likely are now).


You keep harping on Facebook et al but you're ignoring everyone pointing out that Section 230 protects everyone at all scales. Without Section 230 exceptions there's no protection for any type of user generated content.

That means no user reviews of anything. No user-contributed information, so no more Wikipedia or OpenStreetMap. No wikis of any kind, in fact. No hosting of public data sets for ML research. Forums would be a liability nightmare, so they would go away.

Not only would current and new user-generated content make sites liable, but so would hosting any historical content. So to avoid liability, the web would have to be scraped clean of user-generated content.

Facebook and other large sites would be the only ones to survive because they could afford a moderator army. Your extremely short sighted position would basically leave only large content producers. The Internet would regress to the curated Online Service model, but worse, because user communication would need to be disallowed or heavily moderated.

You're advocating for the Internet to turn into broadcast television. It's sad that you either can't or won't accept that implication.


Why do you suppose the only way user content can possibly be shared on the Internet is through content aggregation sites? I am suggesting this is a blindspot we've all fallen into that may have created the monsters we are now fighting to protect.

> Your extremely short sighted position

I am getting this a lot in this comment section so far. I mean, hey we should all be thick-skinned, right? My disagreement with your conclusions or predictions of what would happen doesn't mean I suffer from myopia and it isn't very polite to suggest otherwise.

Central aggregation of user generated content isn't the only possible mechanism to do anything on the Internet. Removing the legal protections for aggregators may slow growth and make it significantly more difficult to centrally aggregate content, but that might actually be a good thing.


I'm sorry but you are suffering from serious myopia by getting hung up on big sites like Facebook or YouTube. Removing protections for sites hosting content will have a deleterious effect on the entire Internet.

As I said and you conveniently ignore, user generated content would have to be scrubbed from the Internet except from the big players like Facebook that can afford an army of moderators. So first order problem is big sites like Facebook would be the only venues of user generated content.

It also creates a slippery slope for ISPs/hosting companies. It could easily be interpreted that an ISP or hosting company is hosting user-generated content - they literally are - so they're liable for that content. They would not carry any content that might make them liable for anything, so they'll either shut down or moderate their services such that individuals have no means to post their own content. ISPs already restrict or block hosting content; they would only get more draconian if they faced legal liability for someone hosting a site.

You can't suggest a course of action that would consolidate power in big sites like Facebook and then opine about people posting content outside of those sites. There would no longer be an "outside" of Facebook because no small player could ever afford the moderation or insurance against liability.

You might dislike Facebook or YouTube and they might be filled with dreck. They might need their own types of regulation but dooming all user generated content because Facebook's management are assholes is not a fix.

Instead of getting offended when people point out your myopia maybe take a step back and apply some critical thought to your suggestions.


> As I said and you conveniently ignore, user generated content would have to be scrubbed from the Internet except from the big players like Facebook

I've repeated it ad nauseam but I'll repeat it again - this is just hyperbole.

In what capacity do the majority of Internet users aggregate the content of others? I'm thinking about my own use of the Internet. In what capacity do I own domains or applications where I personally publish the content generated by others?

Let's consider a few possible cases (which do not apply to me). I have a blog with a comment section. Someone posts some comment that leaves me open to legal liability. I have a few options, including turning off comments on my blog, moderating all comments on my blog before they go live, or paying a third party to handle moderation (and indemnify/insure me against legal liability).

Second, I am passionate about some hobby and wish to create a public forum, on a domain I host, that allows for discussing it. Malicious participants show up and start to post content on this unmoderated forum that opens me up to legal liability. Again, I have to deal with this in some capacity.

Third, I am an entrepreneur with my sights on a startup. This startup is like an Instagram, Pinterest, Reddit, Tumblr/Blogger, Medium, Quora, Yahoo Answers, Stack Overflow, etc. I am concerned that malicious actors will use my new venture to publish malicious content. Right now, I don't care; I just build it without any worry. Without Section 230 I have to seriously think about how I ensure content is moderated.

I'm just not seeing the Internet break in any of these scenarios. I'm not seeing the Internet being scrubbed of content. In fact, as far as I can tell all of the above happens already to some degree. Do you expect defamatory content on Stack Overflow? Or do you expect it to be removed?


> paying a third party to handle moderation (and indemnify/insure me against legal liability).

For any criminal material that makes it through the third-party moderation, the publisher will be criminally liable. The third party can pay for the publisher's lawyer to defend the publisher in court, but if the publisher is found guilty they will be the one in jail, not the third-party moderation service.

Ensuring the above does not happen is in part why Section 230 was put in place, at least to my understanding.

Your argument here does not seem to consider criminal liability; does that change your outlook, or was it already incorporated in your viewpoint somehow?


You don't see the Internet breaking in your scenarios, yet in all three the extra liability on the part of the host completely changes the calculus of even bothering with the endeavor. You may not want to start a Quora competitor, but someone else likely does. The needless extra liability either breaks the business model entirely or makes it too expensive for a new player to break in. So the extra liability just locks in the big players.

You're also far too focused on sites you seem to dislike. Making hosts liable for user content will also affect every single industry forum, mailing list, or chat system. A flame war on a Linux distro's mailing list could very easily get that distro sued out of existence just defending itself. Even an innocent error on a wiki could open up a bunch of volunteers to legal liability.

The core problem that you're ignoring or not seeing is user content doesn't need to be libelous or illegal to end up in court. There's legal trolls that sue people for stupid or frivolous shit all the time. Simply defending yourself costs money which is something a Linux distro or a fan maintained wiki don't tend to have in abundance. It's hard enough to sue some mailing list member or wiki contributor that it tends to only happen with legitimate issues. But if the bar is lowered that hosts become liable for user content the legal trolls will descend. It's not just the legal trolls with civil suits, there will be plenty of DAs and AGs looking for easy wins (to score political points) that will go after sites for stupid reasons.

Sites that can't afford some moderation service or liability insurance will just avoid user generated content. They'll also remove existing content, you know - scrub it from their site, because there's no way of knowing it won't attract a lawsuit. You assume defamatory content gets tagged by the poster as "#defamatory" and it's then immediately obvious.

Stack Overflow might take down an obvious troll post but what about a post pointing out a bug in a software product? If some developer became sufficiently upset they could sue SO because someone pointed out a major bug in their software. A completely above board discussion of a bug could easily be seen by the developer as defamatory. Just responding to a suit would cost SO money let alone actually defending themselves. There's a wide gulf between dealing with obviously offensive/malicious content and being under constant threat of legal action no matter how good your moderation.

If you don't publish anyone else's content you're fine. I don't give a shit about you. Maybe your personally published content is worthwhile, maybe it's complete shit. I don't know or care. But I do know there's some YouTube channels I really enjoy that wouldn't exist without YouTube as a platform. I have also benefited a great deal from Wikipedia, among several other wikis that would not be able to operate if they were constantly threatened in court. I've definitely benefited from product reviews, restaurant reviews, and OpenStreetMap contributions. All of that content I know has been worthwhile, and I would much rather that it, and the platforms that enable it, continue to exist.


I don't think we're going to find common understanding on this but I appreciate you taking the time to at least think about the actual consequences. I happen to disagree with your assessments.

> The needless extra liability either breaks the business model

We can agree, I hope, that if I ran a shared image host then I should at least pay the cost to ensure that child pornography, snuff, or whatever other horrible images we can agree on are removed. I hope we can both agree that such expense might be impossible for some business models, but that such expense isn't needless. We can probably also agree that Section 230 likely won't protect me from three-letter government agencies insisting I remove classified content, even if my own morals would allow such content.

So yes, I'm asking for /extra/ liability but we can disagree on what is or is not needless.

> You're also far too focused on sites you seem to dislike

It may seem that way since I am arguing that Section 230 may be the seed from which they grew. I'm arguing that allowing single entities to re-publish volumes of content beyond their means to moderate may be bad at its core. Perhaps we should limit everyone's ability to post unlimited and unmoderated content. That includes me. So I can't just put a blank billboard in front of my house, allow anyone to write any slanderous thing on it, and then shrug and say "Section 230" when the neighbors complain.

On the topic of Stack Overflow, I wonder if they have taken down clearly false and libelous claims. Same with Wikipedia. I doubt either have a clean record either way.

> But I do know there's some YouTube channels I really enjoy that wouldn't exist without YouTube as a platform.

I want to take the time to descend into my own hyperbole just for rhetorical effect. Lots of the world was made better in small ways by tremendously horrific practices. I love Youtube and I watch it every single day. Careers have been born on it, along with a small number of millionaires. Does that mean that a single company controlling something like 90% of the personally created videos on the Internet is a good thing? For every Youtuber you like, how many of sufficient value have been buried by Youtube's algorithm?

Have you ever read the "Wikipedia has cancer" post [1]? When you really look deeply at what we think we are protecting ... are you sure it is what you think it is?

I feel like I'm taking crazy pills as the nerds of the world cheer on the bullies as they steal, repackage, and profit off of the user-generated content of others. And when someone suggests that, as a price, they should at least be held responsible for the worst of the content they republish, everyone acts shocked, as if these billionaires couldn't possibly manage all that.

What I'm saying is: if they can't manage it then they should stop. And if you can't figure out how to do it then you shouldn't even start.

1. https://en.wikipedia.org/wiki/User:Guy_Macon/Wikipedia_has_C...


> Why do you suppose the only way user content can possibly be shared on the Internet is through content aggregation sites?

Do you have an alternative that could survive if user-generated content is made legally risky to host and is subject to the whims of moderators on the few sites/publishers that can afford to bear those risks?


> I am literally questioning: should they survive?

I say yes. Those forums of yesteryear were really good examples of where the internet can shine. The evolution of spammers made self-hosting prohibitively difficult, and the big fish grew fast and swallowed the market. It's a shame, and that's a reason that I'm generally in favor of breaking up the biggest players (though, I haven't seen a specific proposal that I'm in favor of)

> > Would the open source software community survive?

> I'm not sure how Section 230 applies to code but I don't think public forums like...

You seemed to miss the thrust of my comment about github. Github isn't just a repository of code, it's also a public forum! The ability to file and discuss bugs out in the open is a feature that would be sorely missed -- I've gotten two bug reports this week from previously-unknown users. That wasn't common back in the mailing list days, and I'm really happy that the bar to bug reporting is lower.


Well, to be fair, you have an even stronger argument against me in that code is really just text. I mean, I can create `libellous_rant.txt` and push it to a public Github repo. Since any file in a public Github repo is viewable (even without setting it up as a site or whatever feature they have), that could be viewed as publishing that content. So Github couldn't just force moderation of issues; they would have to moderate every single file in every single repo.

But to address the specific concern you brought up, removing Section 230 wouldn't prevent someone from submitting bugs/issues. It would just force the moderation of those posts before they were made publicly viewable. For small projects that receive two or three bug reports a week, I doubt that would be the massive issue everyone here is wringing their hands over. It becomes a problem with scale - like 1000+ issues per day on a project run by a single developer. But to be fair to my position: could such a developer even deal with that volume of issues if the default behaviour were to make all posts public?

I grant that moderation slows discussion. For example, if you were asleep and someone posted an issue then before you even had a chance to moderate some other non-admin user might answer the question. Maybe we lose that. More likely we find a way to work around it.
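To make the default-private idea concrete, here is a minimal, purely hypothetical sketch (not drawn from Github or any real forum software; all names are made up) of a pre-moderation queue where nothing becomes publicly visible until a moderator approves it:

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class Post:
        author: str
        body: str
        approved: bool = False  # private by default

    @dataclass
    class Forum:
        queue: List[Post] = field(default_factory=list)   # awaiting review
        public: List[Post] = field(default_factory=list)  # visible to everyone

        def submit(self, author: str, body: str) -> None:
            # Nothing is published at submission time; it only enters the queue.
            self.queue.append(Post(author, body))

        def moderate(self, approve) -> None:
            # A moderator reviews each pending post; rejected posts are dropped.
            for post in self.queue:
                if approve(post):
                    post.approved = True
                    self.public.append(post)
            self.queue.clear()

    forum = Forum()
    forum.submit("alice", "Found a bug in release 2.3")
    forum.moderate(lambda p: "spam" not in p.body.lower())
    print([p.body for p in forum.public])

The design choice being debated is exactly the one this sketch makes explicit: content sits in `queue` rather than `public` until a human decides, which trades discussion speed for the host never having published anything it hasn't reviewed.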


Your position seems to fundamentally ignore abuse potential by malicious actors. You don't like a particular project? Start spamming child porn and other problematic posts until the moderators just decide they're spending too much time moderating that repo and need to shut it down, even if the developer has done nothing wrong.

I get your philosophical stance, and in some ways, I think it's healthy to question the fundamental assumptions about whether certain services should exist. But I also think you are willfully ignoring second and third order effects to continue this thought experiment where people are routinely showing that these secondary and tertiary effects will be crippling to more than just social media companies. And again, social media companies will not go away with a 230 repeal; they are some of the only entities with enough capital to handle the increased litigation costs that would result.


> Start spamming child porn and other problematic posts until the moderators just decide they're spending too much time moderating that repo and need to shut it down, even if the developer has done nothing wrong.

How is that different from right now? Changing the default visibility of posted content from public to private doesn't change anyone's ability to spam anything. You either have to clear out your moderation queue OR clear out your publicly visible forum after the fact. And isn't that better? By forcing moderation you are preventing that horrific content from being visible while you were asleep overnight. Right now the forum owner can just shrug it off: "oops, I was asleep, not my problem." Without Section 230 that might not be permissible.

> you are willfully ignoring second and third order effects to continue this thought experiment where people are routinely showing that these secondary and tertiary effects will be crippling to more than just social media companies.

In what way has anyone shown any secondary or tertiary effect that would be crippling? I would avoid allowing public comments on my blog? I would avoid creating a public discussion forum? I fail to see this as crippling.

I think people are vastly overestimating the impact of re-publishing the content of others on their personal lives. Yes, it could cripple some potential businesses. But my point is: should those businesses exist? Let's really think about exactly what we are giving up, not this fear mongering "destroys the Internet". Let's be specific. What would we lose that cannot be replaced?


Sorry for the late reply. On how it is different - some of the 230 proposals want to change the good faith prong such that it would be much easier to allege that a Company was "informed" about certain activity and didn't take it down swiftly enough. Right now, even with recent changes, 230 gives some of these platforms reasonable leeway to avoid liability for this attack absent willfully ignoring notice.

As for second and third order effects, again, we have evidence from the past year of what will occur. The crackdown on personal ads, Tumblr, etc. all came as a result of changes to this law through FOSTA-SESTA. Those sites weren't all in violation of the law; they just made a determination that they cannot reasonably risk having to litigate what were previously not edge cases with the budgets they have.

The result? A lot of Tumblr and other traffic ended up on...Twitter. And why? Because Twitter, unlike other smaller entities, can weather litigation and regulatory costs much better relative to smaller competitors. Far from crippling the most egregious actors, it actually EMPOWERED them.

What you are talking about are just first order effects when you pontificate about people not just allowing things on a social media site. But as illustrated above, this will impact far more than just someone allowing comments or not. It can fundamentally reorient dynamics for all sites that allow for user content to be posted, and there are very strong likelihoods that it will lead to a greater concentration of power in incumbent social media companies, exacerbating the very issues you are most concerned about.


Okay, let's focus on the repo, then.

Section 230 prevents github from being sued over a user posting a malicious PR to my repo. Successful moderation, in that case, would require github to employ people who are familiar enough with my code to understand the impact of the PR. This is completely untenable.


I promise you that there are options between "do nothing" and "remove the legal foundation crucial to the development of the participatory internet."

> Yet the companies it protects the most are the ones I feel are the worst.

Assessing legal risk of user-generated content is a financial barrier that only the companies you feel are the worst will be able to overcome. We're discussing law in the comments here like we're all lawyers but let's face it--very few of us are up to the task of determining what is and is not illegal, and even fewer of us could actually survive if that assessment was challenged. User generated content ends up having a massive upfront legal cost, and I predict it will become extinct (both future and retroactive) for US-based sites if Section 230 is repealed...

> The centralized nature of Facebook actually allows for larger scale manipulation of the narrative.

...except on sites like Facebook, who make unimaginable amounts of money and can likely afford to fund private development of automoderation software and can weather the storm of lawsuits for content that manages to evade the filter. Facebook will only become more centralized as other online communication platforms are unable to bear the costs of publishing user generated content, and their control over the narrative will increase.

> Maybe that would drive people to create their own websites once again.

Sure, but how are people going to find these websites if I'm effectively reliant on the tech giants to tell people about it? Do you trust Facebook to not start censoring links to external websites? If there's no Section 230, then they could easily justify censoring off-site linking by saying they can't moderate the content of uncontrolled sites. How is Google going to exist if it's liable for what it links to? How are content aggregators going to exist? Forums? Chat rooms?

> Facebook just disappeared off of the Internet tomorrow - I'm not really sure I would mourn that

Same, but Facebook's not going anywhere. It'll just start charging its billions of users directly and continue telling me what I can and cannot read according to the whims of people I don't know and have no influence over. Meanwhile, all of my other options for discussion will slowly start disappearing as it becomes too costly to continue operating. There are better options than allowing that future to happen.


> We're discussing law in the comments here like we're all lawyers

Even worse, we are discussing the future social effects of changes to law as if we were psychics. Not even lawyers or the best judges could claim to do that correctly. The default position seems to be "revoking Section 230 will ruin the Internet". I'm honestly trying to see how and I just don't see it. It would change the Internet and it would make certain classes of business more difficult.

> Facebook will only become more centralized as other online communication platforms are unable to bear the costs of publishing user generated content

I don't see a substantial change in Facebook's position either way. Is everyone still waiting for Mastodon to usurp it? Or maybe we dream of some young, ethical startup winning the hearts and minds of the globe and showing us all how to be benevolent in this space? The idea that Section 230 somehow helps create the conditions that get us out of the mess we are in is a pipe dream. I would love someone to sketch me out a plan, based around the legal protection the current laws provide for aggregating user-generated content, that slays the beast of Facebook.

> Do you trust Facebook to not start censoring links to external websites?

Not any more or less than I trust Facebook to show my posts on anyone else's feed. In a world where gigabytes of content are generated each day, more content than any human could possibly digest, Facebook necessarily shows you some slice of it. The fact that we don't hold them accountable for the slice they choose is frankly crazy to me.

> How are content aggregators going to exist? Forums? Chat rooms?

Should /unmoderated/ content aggregators, forums, and chat rooms exist? This is the fundamental question I am scratching at.

It seems people are making a motte and bailey argument here. They seem to suggest content aggregation couldn't exist without Section 230. This is hyperbole. It would mean the platforms that aggregate user generated content would be forced to strictly moderate it or face legal trouble.

> How is Google going to exist if it's liable for what it links to?

That brings us firmly in the territory of law that I am not familiar with. I know there are cases where the question of publishing links and how that relates to content and/or copyright is grey. However, what liability I would face if I were to post a link on my own blog to someone else's content is something I have no knowledge of.

> Meanwhile, all of my other options for discussion will slowly start disappearing as it becomes too costly to continue operating.

I'm not sure this is necessarily true. Outside of reddit and hacker news I can't think of any other space I even bother posting anything. The majority of my meaningful communication is done 1-on-1.

We're treating a specific class of communication as sacrosanct. Not even: I'm free to say what I want. But rather, I'm free to create open public spaces where anyone else can post anything they want. We're talking about a very specific kind of thing and I'm unsure if that specific thing is worth having at all.


I’m with you. I think we would all be better off without forums for people to anonymously yell at each other.

If Facebook, Twitter, and Reddit all disappeared tomorrow it would be an enormous win for American citizens. The path to unification and healing does not run through big tech.

And yes, I understand that it would also be the end of HN. I accept that.


Having read it over and thought about it I don't think I'd want to give up all the freedoms we have on posting whatever the heck we want. If it means Facebook ends up with hate speech and antivax groups so be it. I think the alternative is worse: where if I were going to build some type of new forum I'd have to be liable for everything anyone posts. I want to see new ideas being tried without fear of litigation.

That scary thought is the reason why I sympathize with big tech even though they don't always do the right thing with bans and restrictions. They are literally in new grounds. We haven't had this world of technological discussion available to us ever before. Most things are being done for the first time ever right now.

I remember how tough it was to moderate IRC channels that grew larger than a certain number of users. Imagine having to wrangle HUNDREDS OF MILLIONS of users while trying to outmaneuver all the bad-faith, harmful actors.

I'd rather live in a world where people can create websites and moderate them as they wish, since the alternative is probably no website at all since you are bound to run into bad faith actors in life.

As long as we can still freely create websites online, there should not be people who are against moderation.


Though the law makes no distinction of publisher vs platform, I think people's intuitive sense that something is "wrong" is valid even though they may not be able to express it clearly. Publisher vs platform is just the easiest/closest way to express it for them.

For me, the problematic/key question and example is Facebook's (News) Feed. When content is collected, curated (algorithmically) with specific intent, published/presented in a particular order and layout to communicate and derive revenue, at what point is it a creative work with authorship?

If I prompt 100 people to comment on a topic by placing information in front of them, and then take portions of those comments, reorder and present them to you shaped by an overarching narrative of "what you may find interesting related to this topic" and place it on the front page of my website, in what way is this different than a newspaper?

A newspaper can be sued for defamation, however, Section 230 (c)(1) shields Facebook from any liability in the case where this selective curation and display of information contains known falsehoods or defamations. If any reasonable curator of facts (reporter) or newspaper editorial board would identify and reject these falsehoods or otherwise be sued, does Facebook get a pass because it was a computer curating?

*Edit: The reason I think this may be problematic is that it removes any check on purposeful misinformation that has traditionally existed on our previous methods of speech amplification (newspapers, tv, radio). Facebook has no incentive not to publish the most engaging information even if it is false, as it cannot be sued. If it could be, you would see it actively prevent misinformation. The standard would be what the courts would find it responsible for under existing libel laws, which is a difficult bar to clear, particularly for public persons, but is the only restraint on yellow journalism we've traditionally had.


This is one take, but another take is what Section 230 should be, putting aside legal technicalities. We have expectations on how big tech companies should operate, and they are not being met. Facebook is bigger than any country. Twitter is bigger than most. They are the digital public square.

Even if US law permits them to act as they will, it is dangerous for our society to have organizations that are essentially utilities provide a non-neutral platform. It doesn't make a difference if a private company is censoring you - the distinction is just cosmetic. The impact is as real as a government censoring you, since any alternative avenue of speech is significantly less effective and for most intents and purposes, simply doesn't exist.


Facebook and Twitter are absolutely not even close to being utilities. Utilities are essential services AND have monopoly power. Facebook and Twitter are neither essential services nor do they have monopoly power (there are a billion other websites you can post on, and the barrier to entry to creating your own website is close to zero).

Being big is not the same as being a monopoly. McDonald's is big, but it is not a monopoly because it has a lot of competitors. Regulating Facebook or Twitter as utilities would be as dumb as regulating McDonald's as if it were a utility.


I think that Twitter and Facebook are a lot closer to the United States Postal Service than they are to McDonald's. Imagine getting banned from sending/receiving mail and having your home or business address "delisted" by a private company, and then you get closer to what is going on.

If you get banned from Twitter, you can go to Facebook, just like if you get banned from McDonald's you can go to Burger King.

If you make a habit of shitting in the dining rooms, you'll likely find yourself banned from all the restaurants eventually.


The false equivalence between the right to due process in accessing the world's de-facto global communication platform and eating junk food is honestly sickening to me. The power to say anything you want to anybody in the world has fundamentally different consequences for every society on Earth than the power to slowly poison yourself in the colorfully branded plastic seat of your choosing.

If your problem is the power and reach that these companies have, then fix that problem. Break them up; mandate open communications protocols; create a gov't-owned communications platform. Destroying UGC on the Internet, or passing blatantly unconstitutional laws, isn't going to fix the problem.

> the world's de-facto global communication platform

Sorry, was that Twitter? Or Facebook?

It's kinda hard to argue it's a monopoly when I can't figure out which one you're referring to and you're saying "Twitter and Facebook".

Meanwhile, this is why the USPS isn't a great comparison: https://en.wikipedia.org/wiki/Private_Express_Statutes


If the person you responded to had said "monopoly," your comment would only be pedantic (and maybe bad faith), but they didn't say that - it's easy to argue with a straw man. Don't demean yourself.

The USPS is

1. A government organization

2. An essential service

3. A de facto monopoly in many rural areas that are not profitable for private companies to serve.

Facebook/Twitter are none of these things.

HN: Facebook and Twitter are a stupid, pointless waste of time and you should delete your account and leave those platforms.

Also HN: Facebook and Twitter are essential services and banning people from Facebook and Twitter is a fundamental violation of their human rights.


How does this work for a "Letters to the Editor" type section in a newspaper?

Or what separates a heavily moderated online forum from a volunteer run online magazine? Is it the asking for submissions part?


Newspapers are publishers. They've never tried pretending they're platforms. They have editorial control because they are clear and forthright about who they are and what they do.

There is no difference between a "publisher" and a "platform" under section 230.

We're talking about amending section 230. Maybe there should be.

I genuinely don’t mind if the internet is fundamentally reset. I am completely unconvinced that Facebook, Twitter, Reddit, and other such services are a net benefit for humanity. Being able to yell at each other in giant anonymous forums is not something that I think is worth protecting. Perhaps we should go back to peer-to-peer communication.

So to those who keep saying “the internet as we know it as at stake”, I say... so what? Maybe we got it wrong.


You don’t speak for we.

I enjoy Facebook, Twitter, Reddit, etc. You don’t have to use them. Why can’t you respect that not everyone likes or wants to use the same websites that you do?


You just missed the "I". There were several.

So... no user generated content at all? That seems to be where you are headed here. You mention peer to peer, but where do I host content then? Even my ISP could be liable for allowing access to it.

Distributed and decentralized networks and databases. Communities can form and police themselves, but no central servers or authorities will be involved. It wouldn’t be the end of internet communities but it would be the end of centralized, public, internet communities.

So no public internet? You only forward packets for people you trust? This seems... less than optimal.

No. The public internet will still exist. You can write your blog, you just can’t have a public comment section. Most of the internet would still exist. Public centralized forums, such as HN, would go away. But there is nothing that would stop people from starting decentralized communities.

Url changed from https://popehat.substack.com/p/section-230-is-the-subject-of..., which points to this and a bunch of other articles, and doesn't add much otherwise (other than Popehat-style snark).

If one of the other URLs is a better fit, we can change it again.


Isn't this an example of what we're talking about? Even if this change is objectively better, there were many comments posted here with the original URL as the intended target and now their comments are directed at a different URL, all done without their explicit consent?

Yeah, site operators will exert editorial control. This is quite a pertinent example.

I think Popehat is great, sometimes, but I also like HN's anti-snark guidelines because it brings the conversation up a level. Would I have made the same choice as dang today? Maybe not. Do I appreciate his transparency? Yes. Do I agree that it fits with the site guidelines? Yes.

As for the comments... I've read them a few times throughout the day. And lemme tell ya, folks are rarely responding to the Popehat article. But, given that the Popehat article was essentially a bunch of links to other articles after a couple of snarky paragraphs framing the issue, that's not too bad. At least one dead comment was griping about Popehat. Womp womp.

Was the change made without the explicit consent of every user? Um... Yes. But what entitles us to that level of control over HN? This isn't a direct democracy, it's a news aggregator. This site also allows us to edit our comments, without the explicit consent of every person who responds to them.

Would it be appropriate to sue HN into oblivion over this? Please, oh please, no.


> But what entitles us to that level of control over HN?

What entitles you to that level of control over email? If any of the emails you sent were modified by Google, you would be outraged, but we should just accept "editorial control" over Twitter, Facebook, HN, etc.?

I think that's the real debate happening here. It's not just about censorship, it's about exercising control over the content. Many feel strongly that this sort of control should be in the hands of the creators of that content, not editors/moderators/site owners/etc.

> This site also allows us to edit our comments, without the explicit consent of every person who responds to them.

I would be ok if the person who originally submitted the article made that change himself and there was a history associated with the submission that showed the change. That's the creator exercising control.

> This isn't a direct democracy, it's a news aggregator.

It's kind of acting like a newspaper though? If content is being edited, that's the job of an editor. I don't think it's such a far stretch to imagine top comments being edited to improve the conversation, etc. at that point, what's the distinction with a newspaper?


I do not recognize posting a link, with its verbatim title, to a news aggregator as a "creative work."

In the couple of years that I've been here, I can only recall one instance of dang editing somebody's comment. He explained precisely what was changed, it was because a typo or something was causing the conversation to go sideways, it was after the edit window had closed, and the author thanked him for making the correction. As far as I can tell, the site maintainers are quite committed to transparency of that kind.

Maybe a "history" feature would be nice. OTOH, I appreciate that HN takes a very slow and deliberate approach to adding features to the site. Personally speaking, most social media crashes my phone browser, and HN is beautiful in its simplicity.

> It's kind of acting like a newspaper though?

If newspapers have live comments sections and don't contain the text of a vast majority of their stories... um, no. This isn't at all like a newspaper.


For sure, that's a hazard of changing urls and titles. It's just that the cost of not doing it is greater overall than the cost of doing it. I hope it's at least clear that we do this all the time, on every topic under the sun, it's not specific to this topic or any political cause.

We usually post in the thread that we changed the URL and/or title and trust readers to be smart enough to figure it out. Some cases are worse than others, and in some of those I'll add replies to particular subthreads explaining that they were posted before the URL or title was changed.


No.

I kind of intended to submit Ken's opinion but I see your point.

If platforms are going to start acting like publishers, they should no longer get special treatment when compared to other publishers.

Remember, if the info platform monopolies help the democrats today, they can help the republicans tomorrow.


From Popehat's post:

> Among the most common lies: Section 230 requires sites to choose between being a “platform” or “publisher”

The idea that Twitter moderating its users' posts means it's acting like a "publisher" is nothing but Republican propaganda[1]:

>Furthermore, a number of senators have prominently criticized Section 230. For example, Senator Ted Cruz (R-TX) repeatedly (but completely falsely) claims that Section 230 only applies to “neutral public forums."

Threatening Twitter and Facebook with liability for their users' content is authoritarian suppression of free speech. The US government should not be forcing Twitter or any other social media service to carry the president's re-election propaganda - even if the propaganda is factual, let alone if it's full of holes and lies like the NY Post story.

[1] https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3306737


From the comment:

> If platforms are going to start acting like publishers, they should no longer get special treatment when compared to other publishers.

The GP doesn't argue that Section 230 says this or that, they're arguing that internet companies who act like news sites should be subject to the same laws as news sites.


The problem is that Twitter and Facebook are not acting like news sites because

1) they don't write the stories or in any way pay the journalists

2) news is a minority of the content

3) moderation is not the same thing as curation

4) hosting a story is not the same thing as publishing it.

If I post an NBC News article to Twitter, NBC is the publisher. If that article contains libel, NBC is the one on the hook in court, not Twitter. (However, if Twitter discovered the article was very likely libelous then it would be both reasonable and responsible to restrict sharing the article).

GP is really making one of two authoritarian arguments:

a) Platforms are not allowed to make broad decisions about what sorts of content they want to host. Presumably GP would then also agree that YouTube's ban on pornography means that YouTube is a "publisher," and that every time Reddit removes a racist subreddit it is acting like a "publisher."

b) If a platform does not want to host the president's dishonest re-election propaganda, they should expect to face financial and legal consequences.

Of course nobody would really say "b" out loud, hence the word games about "you see, Mastodon is a platform but Twitter is a publisher."


The fact that you're being downvoted just shows the partisan reflexes of those that did it.

Making a factual correction without any additional commentary. Down the memory hole.


I think this comes down to the fact that removing lies and fake news hurts Republicans at this point in time.

If in the future the Democrats are the lying ones, then those lies deserve to be removed too.


This tidbit from a WSJ story is rather revealing of the actual issue going on here[1]:

> In late 2017, when Facebook tweaked its newsfeed algorithm to minimize the presence of political news, policy executives were concerned about the outsize impact of the changes on the right, including the Daily Wire, people familiar with the matter said. Engineers redesigned their intended changes so that left-leaning sites like Mother Jones were affected more than previously planned, the people said. Mr. Zuckerberg approved the plans.

That is: Facebook decided to intervene to benefit the right. I don't think this is just because of right-wingers at Facebook: surely a large part of it is bad-faith attacks from people like Ted Cruz.

The idea that Twitter and Facebook are conspiring to suppress legitimate criticism of Biden and thereby defeat Trump is plain ridiculous.

[1] Story is here: https://t.co/sjOYrLQdc3?amp=1 but I got the blurb from this tweet: https://twitter.com/patcaldwell/status/1317140564169625600


How do you know who is actually lying if half the information is removed from the sources you usually read?

> How do you know who is actually lying if half the information is removed from the sources that you usually read?

As far as I know, they're not broadly removing "half the information" (which I'm taking to refer to conservative viewpoints), but disinformation related to QAnon, voting, covid, etc.

Disinformation is not something that will help anyone make better judgements.

https://apnews.com/article/election-2020-media-social-media-...

https://www.bbc.com/news/world-us-canada-54443878

https://www.washingtonpost.com/technology/2020/08/11/faceboo...

https://www.cnet.com/news/facebook-twitter-block-trump-post-...

https://www.reuters.com/article/us-facebook-election-exclusi...

https://www.theglobeandmail.com/world/us-politics/article-fa...


Platforms and publishers are no different under Section 230. Platforms are not getting anything special.

False. "No provider or user of an interactive computer service shall be treated as the publisher..."

The whole point is that the provider of the interactive computer service (ie "the platform") is not to be treated as the publisher of anything anybody says on the platform.


> Remember, if the info platform monopolies help the democrats today, they can help the republicans tomorrow.

They do?


Case in point: two town halls yesterday. One candidate gets asked about their most current scandal (tax returns); the other doesn't get a single question about their scandal (Burisma). The media is undoubtedly carrying water for Joe Biden. That isn't even arguable anymore. The question is why?

That just demonstrates that there isn't an easily definable political neutral. From my point of view, the "Burisma scandal" got all the attention it deserved. The reason the media isn't harping on it is because it wasn't a scandal, and one political party was desperately trying to make it so.

In the same way, a lot of people would say it would be neutral for media to present arguments that global warming is not man-made, but people who care about scientific fact would claim that even presenting the skeptic argument is non-neutral, since you are signal-boosting an argument with no basis in reality.


Jesus Christ, I hadn't realized just how far we have fallen until this comment. I don't necessarily blame anyone for thinking the way they do, it just baffles me.

For the endless commentary on Trump profiting off the presidency, Trump running an "organized crime family", Trump this Trump that, we have actual hard, concrete evidence that a Vice President's cocaine addicted son was selling access to the office (presumably to fuel his addiction), and on top of that his father lied constantly to the American people about it.

How is this not a scandal?


> Remember, if the info platform monopolies help the democrats today, they can help the republicans tomorrow.

Facebook has had its thumb on the 'balance' scale in favor of ultra-right-wing sites for half a decade, at least.

- Zuckerberg calling a group of non-profit news sites "not real news" [1]

- Zuckerberg ordering the algorithm of the news feed to be biased toward promoting Breitbart et. al. [2]

[1] https://twitter.com/mathewi/status/1317124357873860609

[2] https://twitter.com/dnvolz/status/1317160066047479809

[3] https://www.wsj.com/articles/how-mark-zuckerberg-learned-pol...


This is exactly the misconception that Ken is trying to correct. There is no difference between platforms and publishers according to Section 230, and there is no "special treatment". This meme has sprung up out of nowhere because people are angry at the social media companies and want to think that anger has legal backing, but it doesn't.

No. It's not what Ken is trying to correct, because this is an opinion, not an interpretation of the law.

Furthermore, as for the law, there is a difference between platforms and publishers. Section 230 says that platforms will not be treated as publishers.


For the people downvoting my comment, please note the following from Section 230:

"No provider or user of an interactive computer service shall be treated as the publisher..."

FB and Twitter are interactive computer services in this case, and we call them "platforms." The law says they are not to be treated as the publisher of the content that users post on their site. Thus, there is a big difference between a platform and a publisher. That's the whole point of the law.

Downvote all you want, but... that's the law.


You have to read the whole sentence:

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

The law doesn't establish two categories, called "publisher" and "platform." It says, if User writes something on Website, then Website will not be treated as the publisher of whatever User wrote. Instead, User will be treated as the publisher. It defines who is responsible for the content that is served by Website: the User who wrote it, not the Website that served it. At no point does it create a category called "publisher" who is subject to different rules from a category called "platform."


>> At no point does it create a category called "publisher" who is subject to different rules from a category called "platform."

Section 230 does not need to create those categories because they already exist under the law. Historically, content providers have been treated as either publishers, distributors, or platforms, and there are different rules for those categories.

If a law is saying someone isn't going to be treated as a publisher, it is implicitly saying they are going to be treated as a distributor or a platform.

Section 230 says that internet content providers aren't going to be treated as publishers of user content, while the same law also says that internet content providers will have some of the rights of publishers - for example, by moderating content.

Under Section 230, internet content providers are treated as distributors in some cases, for example where upon request they need to remove content that violates copyright, but not liable as long as they do so. They are treated as platforms in other cases, for example defamatory content. Although in some ways they have even more rights than offline platform providers - traditionally platform providers have a legal requirement to accept all traffic.

So 230 gives internet content providers the privileges, but not the obligations, of traditional publishers, along with the privileges, but not the obligations, of traditional platform providers.

The reasons this was done are spelled out in the findings and policies section of the law. Some of the reasons no longer make sense - I don't really think we need government policies at this point to "to promote the continued development of the Internet". And some of the things that the act called out as beneficial about the internet are being harmed by the current actions of internet content providers. We are seeing them act less and less like "a forum for a true diversity of political discourse".

That's why people are talking about modifying Section 230. If you get the benefits of a traditional publisher, maybe you should get the obligations as well. If you get the benefits of a traditional platform, maybe you should get the obligations as well.

And yes that would be a huge change in the way content is provided on the internet.


“Publisher” is a category that already exists elsewhere in law. Publishers can be sued for publishing slander/libel.

The whole point of the law is to say that website owners need not worry about being considered a publisher when they let other people post or comment or whatever.


The text is very clear. It protects the rights of an owner of a website to control that website, which is their private property. It does not in any way heap additional responsibilities or legal vulnerabilities upon “publishers”. It’s straightforward if read and interpreted in good faith.

The legal vulnerabilities of publishers already exist. They don’t need to be created.

I'd suggest reading through some of the resources linked in the article. The idea that Section 230 gives special treatment to "platforms" as opposed to "publishers" is common but false. The Techdirt one (https://www.techdirt.com/articles/20200531/23325444617/hello...) in particular I think helps clear up the confusion.

> To be a bit more explicit: at no point in any court case regarding Section 230 is there a need to determine whether or not a particular website is a "platform" or a 'publisher.' What matters is solely the content in question. If that content is created by someone else, the website hosting it cannot be sued over it.

> Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you'll understand most of the most important things about Section 230.

The way I understand it, these big sites aren't simply hosting content, they are themselves creators through editorializing content and so should not enjoy a blanket immunity.


> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

If they don't get special treatment as opposed to publishers, why is it explicitly mentioned that they shall not be treated as the publisher? If there's no difference, what's the point of that?


"Publisher" here doesn't refer to a special legal status. This sentence just means that, if you post something on a website, it's still you and not the website who said it. This law is applicable to any website, even ones which make no attempt to be a neutral platform.

Probably because publishers are, by default, more liable (as liable as any other legal entity) and thus the law aims to reduce liability for the parties in question (the platform creators like twitter, facebook) so that the positive economic impact attributed to these platforms isn't burdened by the law.

I feel like this guy still doesn't get the actual reason politicians are talking about Section 230. They aren't directly rebutting the language of the law; they're threatening to amend or repeal Section 230 because they know the social media giants depend on it to exist in their current form.

One of his links (EFF) mentions "fair and transparent moderation" ... I'd be happy to see that, but aside from it being something societies have had problems with in the past, can anyone actually suggest that's what's happening now?

Are twitter and facebook being fair and transparent?

Isn't the very fact we're having the argument evidence that it's such a difficult problem we need some interim solution while the perfect AI algorithm gets worked out? Some solution that can last for a while if the perfect AI moderators never come.


They're using revocation of 230 as extortion to try to abridge the freedom of the press.

Do you think conservatives are being censored by the big tech platforms? Would your answer pass the veil of ignorance?

Well, conservatives should have thought about it like 10 years ago, when that cabal against the conservative point of view started on social media and in Silicon Valley. Instead they chose to rely on businesses that were largely operated by "progressives," whatever that means today. They had an opportunity to create conservative social networks, and they didn't. This isn't a good reason why Section 230 should be revoked. Yes, Twitter is biased against conservatives, but that's OK; conservatives should go create their own Twitter. That's the point of the web: anybody can create their own thing, and the more competition the better.

Everybody is being censored on all these platforms; it's called moderation. Most people appreciate it, including you, or you would be commenting on 4chan instead of the heavily moderated HN.

That's really a manipulation of the term "moderation". They aren't removing swear words or nipples... They are deciding what the "correct" information is and then deciding what we all get to see. If they go out of business because they abused the immunity grant they were given, so be it. I won't miss them. All of social media can burn for all I care.

> Section 230 should be revoked immediately

No, instead it should be expanded to every publication.


Why are so many replies under this post shown in faded gray?

Comments with net negative votes get faded out. Remember that HN is just as much of a political battleground as any other forum (don't fall into the common trap of thinking that tech folks are "above politics"), and that those who benefit from the exceedingly effective propaganda mentioned in the OP aren't going to quietly yield control of the narrative.

This is from other users (with a certain amount of karma) voting down these comments. See "Why don't I see down arrows?" on https://news.ycombinator.com/newsfaq.html#:~:text=Why%20don'...

Because there are a lot of conservative Trump-supporting HN posters in these comments downvoting everybody who doesn't want the Internet destroyed.

It's somewhat ironic that HN's moderators apparently changed the title of this submission from "Section 230 Is The Subject of The Most Effective Legal Propaganda I've Ever Seen" to the apparently less objectionable "What Section 230 really means"

That's just bog standard HN moderation to make titles less baity. If we didn't do that, HN would be a completely different site—and not in a way that most HN users would appreciate. Having the titles on the front page be 'bookish' (to use pg's original term) is one of the core principles here.

We've since changed both the URL and the title.


Fair enough! It just feels like a funny meta example of why one might want to "editorialize" content without also being forced to moderate every single thing posted.

Moderators, stop editorializing via title manipulation! The title of this article is “Section 230 Is The Subject of The Most Effective Legal Propaganda I've Ever Seen”, and the article only makes sense if read in light of its actual title as given by the author, not the edited manipulated title you have given it on HN!

It was an obviously baity title and that is bog standard HN moderation. More at https://news.ycombinator.com/item?id=24805143

We've since changed the URL (and the title) in keeping with another HN principle, of favoring original sources. The Popehat article is really just a list of links with a bunch of extra Popehattiness.


Unbelievable. The next time another reactionary outrage-bait piece from Quillette or Unherd or Matt Taibbi or the National Review worms its way to the front page are you going to swap out its url for a totally different article on a totally different website only nominally on the same topic?

The point of the original article wasn’t just that people misunderstand section 230, it’s that republican politicians are conducting a propaganda campaign to willfully misrepresent section 230–and every thread on HN where someone launches into that tired and fallacious “publisher vs platform” spiel is evidence that it’s working. Facts aren’t inherently clickbait just because they displease the conservative HN massive.


I respect how strongly you feel about this. I don't think the situation is as you describe it though. Basically everyone with strong political convictions is furious at HN for being outrageously (as they feel) slanted toward the opposing side. But it simply is not so. I suppose this is a consequence of the "if you're not for us you're against us" mentality, which is the essential political stance, though growing in intensity lately.

The reason we do these sorts of edits is not driven by politics but by the attempt to optimize HN for curiosity (https://hn.algolia.com/?query=curiosity%20optimiz%20by:dang&...). The principles of how we do that have been worked out over the years, and they're not derived from political positions. Curiosity likes to cut across such boundaries—being limited by boundaries is not in its nature.

I get that above a certain threshold of political passion, the feeling becomes that the site ought not to be optimized for curiosity, but rather for political justice or some value like that. That's understandable—those are also good values. HN would just be a totally different kind of site if we did that. The question then is whether a site dedicated to curiosity has the right to exist on the web or not—including under current political conditions. I think it does. Why shouldn't it?


That’s great, but it didn’t answer the question. Next time “passionate” supporters of racism post an anti-affirmative-action screed, or “passionate” supporters of the “anti cancel culture” movement post some incoherent mess about how they should be allowed to say repulsive things but no one should be allowed to talk back to them, or “passionate” supporters of the president post a link to a tabloid story containing some barely-above-Q-level conspiracy theory targeting the president’s enemies, are you going to replace the links to these stories with more “neutral” sources like you’ve done here? Because you haven’t in the past.

I tried to answer your question by explaining how its premise is untrue, but perhaps that wasn't clear enough. If you're asking me whether we do the same kind of moderation on baity right-wing sources, the answer is of course, we do that all the time. If you're asking whether we'll do that on every future article that you might see as opposing your political beliefs, probably not. How we make these calls depends on the article. For sure we make some of them wrong, but you seem to want us to change the entire system by which we make them, and I've explained above why that would be bad.

The main thing, though, is that your question doesn't feel like a question. To me it feels like you're trying to conscript me into political battle for a position I don't occupy to begin with. HN commenters who feel strongly on any topic (not just politics) sometimes make stories in which, for whatever reason, I, or rather "dang", gets cast as the enemy. That's a job hazard and inevitable, but those stories are not mine and that's not me.


Not sure if this is satirical, but HN aims to prevent clickbait and other 'spam' - see https://news.ycombinator.com/item?id=6572466
