> (c) Protection for "Good Samaritan" blocking and screening of offensive material
> (1) Treatment of publisher or speaker
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
> (2) Civil liability
> No provider or user of an interactive computer service shall be held liable on account of-
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or
> (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
It’s also worth mentioning that before Section 230, if you didn’t moderate, you weren’t liable, so in certain senses it’s a censorship bill rather than a free speech one, since it protects removing speech. That being said, I do understand the need to moderate sites and remove some content, hence my proposal of a modified version rather than a call for its elimination entirely.
So recreate the DMCA Takedown process but for speech? Do you think the DMCA is working well for copyright holders and users?
The abuse of this would be massive. Let's say I don't like the comments you wrote so I email the host of the forum they're on and say they're defamatory. Now the host has to decide if they are defamatory (which is often a tough call even for lawyers) and also weigh the risk that I might file a costly lawsuit anyway. Or they just delete the comment.
That’s a case of harassment and possibly some form of identity theft.
I don’t find your proposal tenable and it would obviously be prone to abuse.
Right. While the text of the bill doesn't remove distributor liability (only publisher/speaker liability), it's applied as doing that too, giving even actively moderating sites only neutral-platform liability. There may be justification for this in legislative history and legal construction, so it may not be a pure judicial mistake, but from a policy perspective it's at least arguably an overcompensation that Congress should correct. Such a correction would be much more modest than many of the reform/repeal Section 230 proposals, but would probably hit a better point in terms of dealing with the worst problems without creating more than it solves.
> It’s also worth mentioning that before Section 230, if you didn’t moderate, you weren’t liable
That's not entirely true. If you did actively moderate, you were liable as publisher, but if you didn't actively moderate you would still likely be held liable as a distributor.
> in certain senses it’s a censorship bill
The entire Communications Decency Act was a censorship bill, and the express purpose of 230 as part of the CDA was to encourage sites to moderate content instead of taking a hands-off position.
OTOH, so long as there is liability for knowingly hosting unlawful content (distributor liability) and antitrust enforcement, I think that's a good thing: it reduces the social pressure for government to push the maximum line the courts will let it get away with in terms of government content restrictions.
In that scenario, I don't know what a reasonable level of effort for Grindr to exert is. It seems infinitely unreasonable to make them liable for any failure; there is a determined person on the other end that will probably eventually find some way of adding spaces or using symbols instead of letters, or using weird UTF-8 symbols or something.
I don't see Grindr as failing there; while they probably could have done more, they seem to have made a good faith effort to stop it. The police should have intervened and filed charges against the boyfriend for stalking and harassment. Even failing that, I would have filed a civil case so I could subpoena the logs from Grindr and used them as evidence in a restraining order.
Grindr is not the appropriate party to resolve this. I don't call Ford when people drive their trucks like assholes. I don't call Glock when somebody shoots someone. If you're going to call Grindr, you might as well call their ISP and Google too, see if you can get the ISP to block Grindr or get Google to route Grindr to localhost. They're complicit in enabling this too.
> Right now courts are applying the liability so broadly that companies aren’t liable even after they are notified about illegal behaviours on their site
This, to a degree, makes sense. They haven't been notified about illegal behavior on their site, they have been notified of allegedly illegal behavior on their site. Grindr is well within their rights to say that they don't believe that the profile violates any laws. For example, it says that he attempted to file for a restraining order and was denied. So that court either found that what the ex-bf was doing wasn't illegal, or that he failed to meet the requirement of a preponderance of evidence. So he failed to convince a judge that his ex was more likely than not stalking him. Should Grindr be required to take action on a claim that is more likely false than true?
> I’d be all for a modified version of section 230 that required sites to have a contact email and made them liable if they don’t address certain issues in an appropriate time period.
That's fraught with issues. What counts as addressing the issue? Is it banning the profiles as people identify them? Is it banning the personal info from appearing in profiles? Do they have to hire a group of people to memorize all the bits of bad data, and check new profiles and profile updates for those snippets, as well as any clever encodings that a computer wouldn't recognize?
What is an appropriate time period? Is it some flat period, like a week, regardless of what changes are required? Does it vary, and if so, who decides what's a reasonable amount of time?
This is not to mention that literally none of this goes through a court, which is terrifying and exceptionally prone to abuse. Of course, it could go through a court, but we already have laws and remedies for this situation in court.
Cases like that make it seem really cut and dry, like there would never be a grey area. Even ignoring cases of outright fraud, what do you do in situations where one side feels victimized but it doesn't actually meet any legal standards? Like if person A always replies to and argues with person B's tweets. When person B blocks person A, they make a new account. Person B says they feel harassed and wants to force Twitter to do something about it. Person A says that Twitter is a public forum, and that if people don't want other people to disagree, they should use a more private forum. It never goes further than that. No threats, no doxxing, no real life interactions. Person A is probably an asshole, sure, but I don't think section 230 grants you immunity from assholes. I don't think it counts as stalking or harassment either (though I could certainly be wrong, not a lawyer). Should we really allow Person B to force Twitter to do something without having a judge involved? I would really rather not give the Twitter lynchmobs yet another way to dispense their own vigilante justice.
More practically, the technically literate would go back to the world of Usenet, mailing lists, and minimalistic forums like HN, hopefully inventing distributed reputation systems in the process. I have a vague idea for PGP web-of-trust-style signing of Usenet posts (attestations published as hidden posts when readers +1/-1), which are then spam-scored based on the depth of the attestation chain back to the reader's own trusted posters, which may have been seeded from one or more centralized databases of group maintainers, similar to the current registration system for moderated Usenet groups, except you could freely choose alternative registrars.
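To make that vague idea a bit more concrete, here's a minimal sketch of the chain-depth scoring part. Everything here is my own assumption (the names, the decay formula, the depth cutoff); it's an illustration, not an existing protocol. A poster is scored by the shortest attestation chain from the reader's directly trusted posters, with trust decaying as the chain gets longer:

```python
# Hypothetical sketch of trust-chain spam scoring.
# `attestations` maps a signer to the set of posters they vouch for.
from collections import deque

def trust_score(reader, poster, attestations, max_depth=4):
    """Score a poster by the shortest attestation chain from the
    reader's trusted posters. Shorter chain => higher trust;
    no chain within max_depth => 0.0 (treat as probable spam)."""
    if poster in attestations.get(reader, set()):
        return 1.0  # directly trusted
    seen = {reader}
    queue = deque([(reader, 0)])  # BFS over the web of trust
    while queue:
        node, depth = queue.popleft()
        if depth >= max_depth:
            continue
        for vouched in attestations.get(node, set()):
            if vouched == poster:
                return 1.0 / (depth + 2)  # decay with chain depth
            if vouched not in seen:
                seen.add(vouched)
                queue.append((vouched, depth + 1))
    return 0.0

web = {
    "alice": {"bob"},
    "bob": {"carol"},
    "carol": {"dave"},
}
print(trust_score("alice", "bob", web))      # direct trust -> 1.0
print(trust_score("alice", "dave", web))     # two hops away -> 0.25
print(trust_score("alice", "mallory", web))  # no chain -> 0.0
```

A reader's client could then hide or downrank posts whose score falls below a locally chosen threshold, with the initial `web` seeded from whichever registrar the reader picked.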
It sounds like you're basically suggesting that making the internet useless is a good thing, because maybe something something cool will come out of the ashes and there's a chance it could be even better after a bunch of extremely hard and broad problems are solved. I don't like those odds.
Big, diverse sites like Facebook and Twitter need Section 230 because they can't effectively use human moderators to sift through the content. They have to rely on machine learning, which has false negative rates orders of magnitude higher than a human's. Yet at the same time, they're constantly trying to shape and edit and, basically, narrate the user content as part of their monetization strategy. That's their dilemma.
Moreover, the distinction between publisher and distributor will still exist. The alternative to strong moderation is no moderation--you're just a distributor, like a Usenet node or the telephone company. But that's more difficult to monetize. (Of course, the legal landscape would be more nuanced than that--traditional libel law wouldn't demand a simple dichotomy between moderation and no moderation.)
Without Section 230 companies would have a more difficult time trading profit potential for legal liability, but it would still be done. Newspapers, write-in columns, bulletin boards, and other forums were around for centuries, all the same exposed to libel law. Even the internet was around for decades prior to Section 230.
I don't understand how you reached that conclusion. They could be sued over any comment that appears for any amount of time. There are definitely comments that have appeared on HN that are libelous.
Moreover even if they pre-screened every comment before it was posted with a team of lawyers who never make any mistakes, they'd STILL have to worry about defending against frivolous lawsuits. Would it even be possible to buy liability insurance for a forum in this world? It would cost a fortune.
And this is for a site that has the resources to have full time moderators. Smaller sites are even worse off.
I don't see how anyone could practically operate any forum or discussion board or comments section that allowed people to post messages in real time.
>Even the internet was around for decades prior to Section 230.
Sure and sometimes your ISP got successfully sued because someone didn't like a comment posted on a message board they hosted.
Not just have appeared, but which are still on display.
Sometimes the difference between libellous and a critical statement protecting the public is purely the difference of the statement being true or not.
This is not something a moderator is necessarily in a position to be able to judge but it's critically important to a community that its members can communicate true negative facts about other members.
Or whether the person making the statement knew that it was false at the time (or should have known). That won't protect the statement from being libelous, but it limits your maximum liability to actual damages that you can show (which in many cases is only going to be the lawyer fees; in the case of a widely repeated libelous statements, how do you determine what harm came from which sites?).
> This is not something a moderator is necessarily in a position to be able to judge
I would make that a stronger statement. Moderators cannot tell whether a certain post is libelous. Even assuming that the moderator knows whether the post is factually true or not (and there are a lot of accusations where, when they come out, no one knows for sure who is telling the truth), whether it's libelous depends on whether the person in question is considered a public figure, and whether the person that posted it did enough fact-checking to be deemed sufficiently diligent in attempting to prove or disprove it. The only person that can determine whether the person being defamed is a public figure, and whether the burden to verify the facts was met, is a judge with jurisdiction over the case. Anything else is just people guessing at how a judge would interpret the case, which is fraught with problems, up to and including the possibility that two judges who both have jurisdiction would disagree on some facet of it.
1) You could be sued now for [potentially] libelous comments you write on HN. What's the average wealth of HN posters? How many times has HN had to field user account disclosure requests so commenters could be sued?
2) There are scenarios where HN could be sued now for [potentially] libelous material. For example, in the way moderators reword titles. Not sure how likely they would be to succeed, but it's certainly plausible, and it would be relatively cheap for a lawyer to test the waters. I'd be curious to see how many letters Y Combinator has had to field regarding its content. I suspect greater than 0, but still relatively few. Do its lawyers toss them in the trash, discounting to $0 the risk of liability? I doubt it--while they may consider the risk low, it's still something, and that something presumably affects HN's policies today.
A few months ago I learned a memorable phrase from an HN comment: think in probabilities, not possibilities. Regarding Section 230, most people seem to be in a mode of thinking where they simply compare a world with existentially oppressive liability vs no liability whatsoever. The world doesn't work that way, not even U.S. law. We're all subject to the possibility of financially existential liability every time we drive a car, but we're not crippled by it. How many Silicon Valley engineers with million-plus dollar homes and assets even have umbrella coverage? While I suspect the number is far fewer than what would be rationally called for, the reason is nonetheless because the probabilities are far less ominous than the possibilities.
Would HN's liability exposure grow? Absolutely. Would their legal costs, including possible settlements, increase? I would think. How would the site change? It's hard to say, but I'll go on record as saying that I don't think it'd be taken down, and I seriously doubt there would be many, if any, substantive changes to current policies and practices.
> Sure and sometimes your ISP got successfully sued because someone didn't like a comment posted on a message board they hosted.
To be clear, my only claim is that I don't think it would be the end of the internet or even social media. It might be the end of Twitter and Facebook as we know it, but the U.S. grants to participatory websites one of the, if not the, strongest defenses to libel liability in the developed world, and yet the internet works much the same everywhere else lacking such a strict defense. Likewise, many people consider civil tort liability entrepreneurially oppressive in the U.S., and yet private enterprise--grocery stores, manufacturers, schools, etc--exists much the same here as it does elsewhere, especially in other developed countries. In fact, often enterprises willfully subject themselves to more risk than they would elsewhere. (That's one benefit of a system that relies on private suits as opposed to regulatory mandates or criminal sanctions.) And yet the worst figures I've seen for the supposed comparative cost to the immensely successful U.S. economy of its overly litigious civil legal system is something like 5% of GDP.
There's a lot of hyperbole and hand-wringing surrounding this issue, and a big reason, IMO, relates to our contemporary, radical narratives regarding Free Speech on the one hand and American litigiousness on the other. While anxiety regarding both may be rooted in a kernel of truth, the full truth and reality--legal, political, social--doesn't support the extreme reactions and doomsday predictions.
While I'm not advocating for repeal of Section 230, I'd trade it in a heartbeat for legislative voiding of Qualified Immunity, if that sort of compromise was on the table between Democrats and Republicans. That's the sort of flexible, pragmatic thinking I wish there was more of in our public discourse. But it can't happen if we're all single-issue voters on every issue, which is what absolutist, possibility-not-probability thinking has turned us into.
We should also expect new bad actors to take advantage of this. As long as they can spam libel faster than moderators can delete it, they can force the site to shut down or risk the lawsuits. While I'm sure YC has its share of enemies deserved or not, even perfectly innocent people are attacked online every day for no reason at all.
Whoever controls the FTC will be able to (and will) pressure the major social media networks into acting as a propaganda arm for their political party.
As dystopian as FB and Twitter are today, in this case, the medicine is poison.
Some people, including both Joe Biden and Donald Trump, have called for a complete repeal of Section 230 at various times in the last year.
AFAICT, that characterization of Biden's position is based entirely on a single oral interview response, which quite arguably was not saying that the law should be repealed but that, on the facts of Facebook's specific conduct, and that of some unspecified other platforms, their conduct should be excluded from Section 230 protections because they were knowingly engaging in misinformation.
Note that Section 230 protections in case law are broader than what is provided on the face of the statute: in addition to the "publisher or speaker" protection in Section 230(c)(1), courts have extended it to also prevent liability as a distributor for content, IIRC by synthesizing 230(c)(1), the good-faith blocking rule in Section 230(c)(2), and some legislative history to add the not-express-in-statute rule that sites are also not liable even as a distributor for the material they don't block, with some exceptions. Biden's statement is consistent with restricting Section 230 to what it says on its face, which would mean only removing publisher/speaker liability, not distributor liability (which comes about when the distributor has knowledge or legal notice of the legal problem with the content).
Facebook spreading the above, or other similarly ludicrous information, is likewise protected.
The First Amendment does not protect you saying that if it is false and you have knowledge that it is false or are grossly reckless in saying it without confirming its veracity, see, New York Times v. Sullivan.
Section 230 is what prevents Facebook from sharing your liability, as a publisher, if they relay your saying that in the conditions in which you would be liable for defamation.
Section 230 repeal would just mean lawyers name Facebook under rule of the deepest pockets. It doesn't stop Facebook from saying things about politicians.
Anyway my point was that there have been calls to repeal 230 from across the ideological spectrum.
In the followup, Biden reiterates the conduct condition and the knowing falsehood criteria, which reinforces rather than weakens the impression that he is calling for the protections of Section 230 to be inapplicable to the actor/action in question due to their knowledge, a distributor-like standard, and not for the law itself to be repealed generally.
I suppose you could read the first line of his response to the second followup ("He should be submitted to civil liability and his company to civil liability, just like you would be here at The New York Times") as calling for publisher-like liability if you ignore the explicit references to actual knowledge as the basis for nonprotection in both the original response and the first followup, but I do think that that is the more strained interpretation, not the less strained.
> and the fact that he's declined to clarify his position in the intervening 8 months.
Why would you assume that he doesn't want to clarify because he wants a full repeal? It's not as if there isn't a constituency for a full repeal, especially on the right, and a key part of Biden's strategy is holding together a Bernie Sanders-to-Bill Kristol left-right alliance against Trump. Keeping disagreements over the details of his position on the issue (which is clearly peripheral to his platform, in the grand scheme of things) out of the reasons for people to not feel comfortable with him is as plausible a motivation, regardless of which side of the full-repeal-vs.-reform divide his preference on 230 sits on.
The DOJ has one: https://www.justice.gov/opa/pr/justice-department-unveils-pr...
The Whitehouse has a somewhat bogus EO https://www.whitehouse.gov/presidential-actions/executive-or...
There have been a bunch of attempts to rewrite and at least a few attempts to just repeal it from both Democrats and Republicans.
If so, please go ahead! But I seriously doubt it. This is a thing political philosophers argue about in journals to this day, that lawyers argue about in SCOTUS cases to this day, and that has been litigated to death in thousands of HN threads over the years.
The question of what "politically neutral" means is perhaps the MOST political question there is. The delineation of political speech from non-political speech defines the playing field.
And even setting aside genuine disagreement, politics does not operate on good faith. It operates on power. In practice, the bill does not outline specific criteria. So "politically neutral" will mean whatever the FTC wants it to mean. Which means it will mean whatever the appointees of the FTC chair want it to mean.
Josh Hawley, of course, knows and understands how power works. He would not be proposing this bill if the big tech companies were right-biased. Democrats also understand how power works. So, in this counter-factual world of right-biased social media, it would be Democrats clamoring for federal intervention and Hawley decrying the "Democrat attack on the most successful American companies". Do you really believe otherwise?
Well, there's a simple answer, but I doubt we'll agree on it. It is impossible for a content-removal practice, algorithmic or otherwise, to be politically neutral. Any such practice will involve (whether implemented case-by-case or encoded into the design of the algorithm) judgements of a political nature and with political impacts.
Right. My point is that Hawley's whole premise of a "politically appointed political neutrality committee" is absurdly transparent.
We can't be terrified of regulating platforms that have massive amounts of control over what most people see or hear about.
1. Maybe, but that's not what Hawley's bill does.
2. Leaving inherently political questions up to the courts invites politicizing the courts -- something that's already happened and that, if it continues apace, threatens to delegitimize and gridlock the entire federal legal system.
3. Given that you're not a Trump supporter or Republican, perhaps you should review the last 20 years of federal judicial appointments before placing so much faith in the courts...
> We can't be terrified of regulating platforms that have massive amounts of control over what most people see or hear about.
Agreed. I think there are lots of reasonable approaches toward regulation and/or self-regulation. The ability of customers to choose from a marketplace of recommendation algos (or implement their own) is the obvious market-based solution.
However, I do not think a politically appointed committee whose job is to define political neutrality is a reasonable approach. And I think that leaving inherently political moderation choices up to the courts would be even worse -- at least FTC chairs aren't lifetime appointments, and at least politicizing the FTC won't deteriorate public trust in the one portion of the federal government that is not yet perceived as nakedly partisan.
1. That means no HN.
2. I normally don't have to remind people of this at places like HN, but... algorithms are written by... humans! Supervised algos use data labeled by... humans!
> Automated moderation should look for identifiable harms (i.e. illicit content, directed threats, terrorism)
Why do you list terrorism separately from directed threats?
What is the line/difference between "terrorism" and an "undirected threat"?
Are militia groups that don't make directed threats terrorists? Are radical religious groups that don't make directed threats terrorists? What if they are run by actual terrorists but none of the speech amounts to a directed threat?
Speaking of which, what is a terrorist organization? Is the KKK? What about small white nationalist or black power militia groups? What about QAnon? What about antifa? What about BLM? What about Westboro Baptist? What about the Black Panthers?
There are people -- elected officials -- who think each of those is a terror organization.
So, defining terrorist organization is absolutely a political fight. Maybe we avoid that and just talk about directed threats? OK. Does that mean that Al Qaeda is allowed to operate on FB as long as they don't make directed threats? In fact, that FB is prohibited from banning Al Qaeda as long as they don't make directed threats? That seems like not a solution anyone is going to get behind.
We haven't even gotten past the "obviously terrorism=bad" and we already have to declare whether BLM, QAnon, Westboro, or militia groups are "terrorists". Which some senators believe is the case and is a 100% political question.
> illicit content
Is Ginsberg's Howl illicit? Is a picture of two women kissing illicit? What about non-sexualized nude breasts? What about nude male bodies? What about an erect penis but in a non-erotic context? Will the dominant answers to these questions be the same in 50 years?
Lots of people would say a site that allows pictures of heterosexual kissing but not pictures of homosexual kissing is obviously taking a political position, but that was outside the realm of "political opinion" when I entered adulthood! Any public homosexual display of affection was obviously illicit.
> absolutely nothing should be removed or blocked based on vague and nebulously defined concerns over "misinformation".
What does vague mean? What does nebulously defined mean? What is the difference between misinformation and libel? What is the difference between misinformation and dangerous information? Is it impermissible to remove a video that's targeted at kids and encourages huffing glue as a fun and harm-free activity?
Anyone who has moderated a forum knows that such an algorithm is going to have all sorts of holes and perceived biases. I've never written an automod that some user doesn't get pissed off about.
More generally: that's just straight-up moderation, it has nothing to do with tweaks to recommendation algos.
What if Twitter realizes that people leave the site if they see stuff about abortion but stay if they see stuff about LGBT rights? Again, viewpoint-neutral, Americans just one day start yawning about abortion and really polarize on LGBT stuff. Can they prioritize posts about LGBT rights over posts about abortion as long as the content served up on the preferred topic is viewpoint-neutral and the only algorithmic goal is more lingering eyeballs?
If no to that, how about sports news vs. SCOTUS decision news?
If yes to that, what about COVID case counts vs. Jobs Report numbers?
Even more generally: anyone who's stayed up to date on robust machine learning knows that defining good notions of robustness -- and political neutrality is a type of robustness -- is very much an open problem. So even if we had a precise definition of political neutrality, which I don't think we do, "simply create an algorithm that has that property" is very much an open algorithmic problem.
In fact, there are even some impossibility theorems in this space. So even if we can define neutrality in a perfectly neutral way -- which we can't -- this might be like passing a constitutional amendment that demands a voting system has all of: Non-dictatorship, unrestricted domain, monotonicity, IIA, and non-imposition. You can legislatively demand "the perfect voting system", but the universe is not obliged to ensure the existence of such a thing. Same for some types of robust ML, and no one knows which side of an impossibility theorem some precise-enough-to-code notion of political neutrality might fall on.
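The voting-system analogy can be made concrete with the classic Condorcet paradox that underlies Arrow's theorem: three perfectly reasonable voters whose pairwise majority preferences form a cycle, so no aggregation rule can honor all of them at once. A minimal demonstration:

```python
# The Condorcet paradox: pairwise majority preferences can cycle.
# Three voters with rotated rankings of candidates A, B, C.
ballots = [
    ["A", "B", "C"],  # voter 1: A > B > C
    ["B", "C", "A"],  # voter 2: B > C > A
    ["C", "A", "B"],  # voter 3: C > A > B
]

def majority_prefers(x, y, ballots):
    """True if a strict majority of ballots rank x above y."""
    wins = sum(1 for b in ballots if b.index(x) < b.index(y))
    return wins > len(ballots) / 2

# Each comparison is won 2-to-1, yet together they form a cycle:
print(majority_prefers("A", "B", ballots))  # True: A beats B
print(majority_prefers("B", "C", ballots))  # True: B beats C
print(majority_prefers("C", "A", ballots))  # True: C beats A
```

No candidate is a majority winner over all others, even though every individual ballot is perfectly consistent. "Define political neutrality precisely, then just implement it" may run into the same kind of wall.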
Which also brings up the REAL question: are tweaks to recommendation algorithms allowed? Obviously we can't ask FB/Twitter to freeze their recommendation algos -- it's their core product. So. If they notice an "obvious bias" and tweak the algorithm to correct for it, who decides whether that was a biased human intervention or a totally appropriate bug fix? Oh, right, a politically appointed FTC.
I think that "politically neutral" is impossible to formalize in code because it is a fundamental contradiction in terms. But even if it isn't, I suspect that any reasonable list of formal specifications would be either mathematically impossible to train a classifier to respect or at least AGI-complete to actually implement. But if you disagree, I'm happy to clone the GitHub repo and mess around with your proposal.
No, it means less 230 protection for HN. Stop conflating this with destruction of the platform, it's becoming like "net neutrality". Remember when tweaking that killed the internet?
>What is the line/difference between "terrorism" and an "undirected threat"? Speaking of which, what is a terrorist organization?
The government has a clear process to designate foreign and domestic terrorist organizations. Let the actual politicians engage in that political fight. Social media companies can use the result.
>What is the difference between misinformation and libel?
Actual malice? If the standard works for newspapers, why can't it work for social media companies?
>More generally: that's just straight-up moderation, it has nothing to do with tweaks to recommendation algos. [...] If they notice an "obvious bias" and tweak the algorithm to correct for it, who decides whether that was a biased human intervention or a totally appropriate bug fix?
None of this relates. Content should not be removed or suppressed based on any political preference or designation, and that includes a fig leaf of facial neutrality. Whether it's recommended to some and not others != suppression, and it's trivial to show that your systems are based on user action not partisan interest.
These aren't sticky questions at all, they're just ways to navel gaze and avoid the obvious solutions that are inconvenient to certain actors.
Really? If HN starts only moderating based on "identifiable harms (i.e. illicit content, directed threats, terrorism)" then it'll quickly become a cesspool and lose the community.
On the other hand, if they continue to apply posting guidelines, how many banned users suing HN over "politically motivated censorship" and shit like that do you think it takes for them to decide it's not worth it? Content removed because someone was an abusive jerk suddenly becomes, in plaintiff's claims, content removed because the moderators didn't like their politics. Now spend your $$$$ to defend against that claim!
You're sticking your head into the sand over what the unintended consequences of your proposals would be because you really really really want to believe it would only have the intended consequences that you like.
(Look at what you do when you bring up newspapers: newspapers have extremely limited user-generated content, because of the standards you're proposing extending. Again: there goes HN.)
The only stuff that would survive would be the stuff with big userbases, big pockets, and the ability to throw a lot of moderating power at stuff. Which all sounds to me more like traditional broadcast media - which is historically claimed to be also unfair to the same conservatives who are making the most noise about this stuff. So... good luck with that.
>> 1. That means no HN.
> No, it means less 230 protection for HN.
I'd be fascinated to hear what dang thinks about HN's future existence if this hypothetical rule -- "No primary moderation action should be made based on human input" -- applied to HN.
It seems impossible to (a) run a healthy forum or (b) avoid lawsuits or even jail. E.g., can you link me to a github repo that automatically catches 100% of libel? Or even 100% of child porn (or I guess actual porn as a proxy for that problem)? Removing libel and other illegal content without "primary moderation action"s that are based on "human input" is not currently possible.
(BTW: that's NOT what Hawley's bill does! It allows human moderation, you just have to keep the political appointees happy.)
>> What is the difference between misinformation and libel?
> Actual malice? If the standard works for newspapers, why can't it work for social media companies?
Because newspapers have a few journalists. Not hundreds of millions of users.
This has to be done algorithmically or it's financially reckless to allow free-form comments at all. If it's so easy to algorithmically identify libel with 100.00% accuracy, go do it!
Given that there are regularly court cases that hinge on whether some statement rose to the level of libel -- cases that even get appealed and where highly trained judges disagree -- I'm willing to bet the problem is AGI-complete. And then some.
> The government has a clear processes to designate foreign and domestic terrorist organizations.  Let the actual politicians engage in that political fight. Social media companies can use the result.
> Content should not be removed or suppressed based on any political preference or designation
So politicians get to define what terrorism means and companies should suck it up and implement whatever the politicians in power decide.
So, if some powerful GOP senator designates BLM a terrorist organization, and social media companies then remove all BLM content, is that not "removing or suppressing based on political preference"? What about pro-2A militias? What about QAnon?
By the way, what about "illicit content"? If some hard core right-winger takes over Twitter tomorrow, can they ban pictures of homosexuals kissing as "illicit content"?
Hawley -- whose bill doesn't even do what you suggest -- is just shifting power over content moderation decisions from companies to political appointees. That's all. It's not neutral, it is based on human input, and it's primarily just a shift in decision making power.
Dressing this up as "neutral" is obvious bullshit. Hawley wants Twitter to understand that his political party is their ultimate master when they choose which speech to amplify on their platform. This is his explicit and openly stated goal. It is about power, not neutrality.
But anyways, this argument is easy to resolve in your favor. You propose not Hawley's bill, but a hypothetical different one where human input can't be a primary consideration. So, you're claiming that a formal specification of the political neutrality of an NLP classifier exists. I've built a lot of classifiers, and I don't believe you. Show me the code.
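To make that concrete, here's a toy sketch (all names, banned terms, and corpora invented for illustration): the best you can do is measure a flag-rate disparity between two viewpoint-labeled samples. Nothing about the classifier itself formally certifies neutrality over all future inputs.

```python
# Toy moderation "classifier" -- a keyword flagger. Everything here is
# invented for illustration; no real system works this simply.
def flags_post(text: str) -> bool:
    banned = {"conspiracy", "hoax"}
    return any(word in banned for word in text.lower().split())

def flag_rate(posts):
    return sum(flags_post(p) for p in posts) / len(posts)

# Two hypothetical corpora labeled by viewpoint. The only available
# "neutrality check" is an empirical disparity on a finite sample --
# nothing about flags_post() guarantees parity on inputs you haven't seen.
corpus_a = ["the election was a hoax", "great rally today"]
corpus_b = ["vote tomorrow", "new policy announced"]

disparity = abs(flag_rate(corpus_a) - flag_rate(corpus_b))
print(disparity)  # 0.5 on this sample; a different sample gives a different number
```

Any formal "neutrality" spec has to say which corpora count and what disparity is tolerable, and those choices are themselves political.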
> The moderation practices of a provider of interactive computer services are politically biased if the provider moderates information provided by information content providers in a manner that [...] disproportionately restricts or promotes access to, or the availability of, information from a political party, political candidate, or political viewpoint
That means that any service that chooses to do something like suppress known conspiracy theories is going to fall afoul of the proposed changes.
It is when political appointees are the ones who judge if you've cleared the bar.
The bill would be more palatable to me if they simply dropped the immunity, without any certification process. But then demonstrating bias would require winning civil lawsuits, which requires demonstrating damage suffered by the bias and also convincing 12 members of the jury in a unanimous vote... which is unlikely to happen, I think.
(Addendum: actually, the real point of the bill may be to just say "Facebook/Twitter/Google is biased, and I'm doing something about it!" and ignore any actual chance of it making law or being reasonable. It's not like many people actually read details of bills to understand what it does and doesn't say.)
No. Killing 230 entirely would allow Twitter and Facebook to be as politically biased as they want.
However, if one of their users libels you, then you could sue Facebook in addition to that user.
And if any Facebook user posts child porn, even for a short period of time, relevant parties at Facebook could face criminal charges for distribution.
You couldn't sue Facebook for being politically biased, but Facebook would be responsible for actual crimes that its users commit.
Hawley's bill says "you won't be responsible for the illegal stuff your users do (i.e., you get 230 protections), but only as long as you keep my political appointees happy."
If you're running a start up, how would you feel knowing that if a user uploaded illegal content to your servers, you could be raided in the middle of the night and imprisoned for it?
Only those with billions of dollars to throw at moderation would be able to comply with the law. Everyone else would need to block user content by necessity, or risk having their lives ruined by malicious users.
The net result is that hosting free speech on the internet would be too risky for anyone other than giant corporations. The liability to host users' speech would be far too high for anyone else.
It only makes sense if the user content is the profit-generator and the forum owner ran the numbers and expects to still be profitable even after lawsuits.
So no more hobby forums, YouTube comments (some are good), or internet access in libraries:
>Kathleen R. v. City of Livermore, 87 Cal. App. 4th 684, 692 (2001).
The California Court of Appeal upheld the immunity of a city from claims of waste of public funds, nuisance, premises liability, and denial of substantive due process. The plaintiff's child downloaded pornography from a public library's computers, which did not restrict access to minors. The court found the library was not responsible for the content of the internet and explicitly found that section 230(c)(1) immunity covers governmental entities and taxpayer causes of action.
Perhaps you should use karma and comment interactions to automatically attest the people you interact with. Add a "report" button to disavow certain users. Now there is a positive and negative feedback loop to reduce the workload of attestation.
Caveat: attestation must be stabilized. The existing hierarchies of admin/(super-)moderator work well as trusted posters. On the other hand, picking and choosing your moderator(s) is interesting and will birth new flame-wars and division.
Caveat (2): Adding more crypto explodes the amount of data which must be handled. Especially when every comment and upvote is signed.
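To illustrate that overhead, here's a minimal hypothetical sketch of a signed comment record (assumption: HMAC with a per-user secret stands in for the public-key signatures a real system would use):

```python
# Hypothetical sketch of signed comments. HMAC-SHA256 with a per-user
# secret is a stand-in here; a production design would use public-key
# signatures so anyone can verify without the secret.
import hmac, hashlib, json

def sign(secret: bytes, payload: dict) -> str:
    # Canonical serialization so the same payload always signs the same way.
    msg = json.dumps(payload, sort_keys=True).encode()
    return hmac.new(secret, msg, hashlib.sha256).hexdigest()

user_secret = b"alice-secret"  # invented key material
comment = {"user": "alice", "text": "nice post", "parent": 42}
record = {**comment, "sig": sign(user_secret, comment)}

# Every comment and upvote now carries a fixed-size signature on top of
# its payload -- that's the storage blowup mentioned above.
print(len(record["sig"]))  # 64 hex characters for SHA-256
```

Multiply that per-record cost by every vote on a busy forum and the data growth is easy to see.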
For example, suppose you're an online service Twitbook used by a vast swathe of the world to communicate, and you decide that you want to allow calls to murder politicians you dislike but not (obviously) ones you like. Section 230 gives you pretty good protection from liability over your decisions as to which political figures get threatened with murder. Probably even if one of your users gets inspired and puts a bullet in the head of someone you'd like to see dead.
Or suppose you've got a nice legalized extortion racket seeking out negative claims about people or businesses, getting them to rank highly in Google, not allowing the original posters to remove them, and demanding money from the targets to take them down. Section 230 offers pretty much ironclad protection for your business model by making it nearly impossible to get a court order forcing you to take the content down, meaning you can ensure the only way to make it go away is to pay up, and you can even literally call the fee a charge to remove libellous or defamatory content and there's not a damn thing the court system will do about it. There's a long-running website Ripoff Report that has this as their business model, and they've won every case trying to get them to remove defamatory content without paying them money for the privilege thanks to Section 230. There's also plenty of imitators going after individuals, seeking out (say) claims they've cheated on their partners and charging money to remove them - again, solidly protected by Section 230.
That's not true on 2 fronts. First, Section 230 requires good faith. That would almost certainly fail to pass the good faith muster, assuming they can demonstrate that it was done intentionally. So civilly, they would likely still be liable. In addition, Section 230 has no bearing on criminal law (it's specifically called out in subsection e). So in the event someone was killed, there would likely be a host of people from Twitter facing charges for being complicit in the death. They are effectively Charles Manson in this scenario, and I think they would have a hard time arguing that selectively filtering messages to expose users to messages encouraging them to kill someone does not count as speech.
I don't see why the second is a terrible issue. They're effectively a tabloid at that point, well known for spreading libelous content. I would be surprised if Queen Elizabeth is overly concerned that the tabloids say she's a lizard person. And also, RipoffReport is the wrong person to sue here, which is why that isn't working. If the content is libelous and you want it taken down, sue the person who wrote it, and have the judge issue a takedown order to Ripoff Report. Section 230 only protects them from civil liability, it doesn't make them immune to takedown requests.
In a funny idea, I wonder if you could upload a copyrighted image and then file a DMCA request against the page and have it delisted by Google. Their terms say you grant them a copyright, but if you upload a work that you don't own the copyright for you can't give them a copyright. Technically you're violating DMCA for the upload, and again by lying on the DMCA form you fill out (since you have to own the copyright) but as long as you pick something nobody is likely to sue you for, it should be fine (copy the credits from a book or something). Or if you want to get clever, you could have a friend make a painting of a stick figure in Paint and slap a copyright logo on it, then upload it and have your friend file the DMCA complaint. For bonus points, do it to every single page. They aren't liable because of section 230, but you could probably still force them to play a game of whack-a-mole with Google.
A child trafficker would place an ad featuring an image of child sexual abuse, with wording that gave coded hints that this was a child. "Amber Alert!"
Backpage would strip out that coded language and run the ad, with the image of child sexual abuse.
Sometimes those children would, after they'd been rescued, recognise themselves in the ads and ask backpage to take the ads down. Backpage refused.
An easy fix is to say that an "information content provider" must be a legal person who is liable for their content. Then it's easy to find where the buck stops for a Tweet or a Rip-off Report or a Revenge Porn.
Section 230 isn't in the Bill of Rights. It's a legislative gift that was given to internet companies to help them grow by granting them a special legal shield for all the highly problematic content that they host and monetize.
If they want to editorialize on the back of that content, then I don't see why they should have such special status.
We do not need to eliminate Section 230. But the definition of 'Good Samaritan' blocking and 'good faith efforts' should explicitly not include the editorial decisions of a publisher.
No, it was given to them to encourage them to engage in "good-faith" censorship of things that the government doesn't like, by ensuring that such censorship would not move them from the relatively weak distributor-liability regime (where they were liable only if they failed to stop distributing content after notice of its unlawfulness) to the any-oversight-is-on-you publisher-liability regime.
It wasn't to "help them grow", which it was assumed they would do anyway, it was to encourage them to try to restrict "bad" content as they grew (it is the one surviving part of the internet censorship Communications Decency Act, and like the rest of that act was, in fact, directed at promoting internet censorship.)
I do not think the government should be intervening on content decisions. I think:
(1) publishers & platforms should be legally responsible for content they host if they are going to editorialize on it
(2) from a 'net neutrality' standpoint, that utilities and platforms should be mostly blind and entirely blameless for the packets they carry, and
(3) we should allow some level of packet and/or content classification in the middle of #1 and #2 without making the utility/platform fully liable for the packets/content they are carrying, if that classification is based on fairly protecting the network/platform from "attack".
The only reason I can see for a fairness doctrine would be based on a theory of anti-trust. To the extent that Twitter, Facebook, Apple, Google, etc. are monopolies, their ability to censor non-obscene viewpoints on their platforms should be limited... and that's a spectrum not a binary switch.
Personally, I don't have a problem with publishers engaging in publishing. So I wouldn't support that.
But there is a large, philosophical difference, IMO, between a publisher and a platform.
And our laws need to be changed to further clarify this.
Sure. They have those rights, just like a newspaper has those rights.
But these companies and newspapers are also liable for their speech.
And the companies that are not liable for the speech, are companies like phone companies.
Phone companies are required to follow certain restrictions, and the courts have found those restrictions to be perfectly legal.
> What is the large, philosophical difference that separates them?
Take the phone network as an example. The courts already treat the phone network differently than they do a newspaper.
What happens when I want to run CatTalk.com, and it's against the rules to talk about Dogs, and someone comes in talking about Dogs? Shouldn't you have the ability to run, moderate, and host the content on your site as you decide?
That means you have a responsibility for that content; the same legal liability that newspapers and magazines have when publishing articles and editorials.
Maybe you prescreen every comment, but you make a bad call and something defamatory gets through. Or maybe someone sues you in bad faith and you have to pay to defend it or settle.
This is a very bad future for the internet. It's an internet where the powerful, who have the full support of publishers, will get to have their voices heard loudly. And the less powerful, who do not have teams of lawyers willing to fight on their behalf, will have a very hard time getting their voices heard.
No, that’s only the standard I want to apply if they are editorializing the content that’s being posted.
Is that the world you want?
It's hard to see how you could run a site that provided near real time broadcast communication to the general public if you had to do that level of vetting of each post to make sure nothing slipped through that might get you sued.
But that's the thing. Section 230 doesn't really have anything to do with moderation. In fact, it allows websites that have user generated content to exist without moderation.
I think/hope we agree that it's completely reasonable and normal for websites to moderate the content on their sites. For example, I want Twitter to take down child pornography that gets posted. Literal hate speech - calls for genocide and violence against people - websites should have the legal ability to remove that from their sites. I do not think there should be any consequences for websites that want to remove this content.
If you remove 230 protections from websites, it forces the sites that are able to survive to moderate more, by making them legally liable for the content published.
For example, one could post about drug deals or something in a thread that mods might not read, and then the mods could be held legally responsible for enabling drug deals on their site.
I, for one, appreciate the current status quo where I generally don't have to deal with neo-Nazis spouting nonsense uncontested on my Facebook wall (and if that arrangement bothers me, I could go to some other site).
I'm happy to do so. I'm also happy the platform has some base-level standards so I don't have to block and contest quite so much.
> Social media companies lean left, the government should ensure they remain neutral by following the First Amendment.
Is there anything in particular about social media, as a technology, that causes all companies engaging in it to "lean left?" If not, this is a problem the market can solve.
Doesn't matter if the government itself posted that article. Doxxing info is immoral and should be blocked.
What about more nuanced cases? Who will be arbiter of "neutral" then?
That's true. Like much of the Constitution, the text is pithy and doesn't specify its definitions. The current legal interpretation of the First Amendment and free speech rests on a particular philosophical tradition that the court adopted in the last century, and especially after the 60s.
A lot of people have absolute faith in the functioning of a "marketplace of ideas," but it's not at all clear to me that such a market can work well when it's flooded by disinformation, just like a market of goods can't work well when it's flooded by counterfeits.
I also think this would result in a mad dash for anonymous, distributed, decentralized communication methods, i.e. things that can't be the target of a subpoena.
Given the toxic influence of both social media on society, and the severe centralization we're operating under... both of those things look very tempting.
Goodbye Mastodon (every Mastodon instance is liable for all toots). Goodbye Internet Archive (can't host content that might be defamatory, they'll be liable). Goodbye GitHub pull requests (GitHub would be liable for any defamation contained in them). And so on.
It's not that I don't understand the value of those things; it's that I see far greater value in not having information in society be controlled by a few companies.
The only people able to blog, for instance, would be people who have the technical chops to completely self-host. Everyone else would be reduced to handing out flyers on the corner like the bad old days.
The behemoth old-media companies would be fine, because they can afford lawyers to go over everything they publish.
Not only would it not be worth it, repealing section 230 would consolidate behemoth media companies' control, not break it. It would do the absolute opposite of what you want.
They don't have special status for their editorializing. If Facebook or Twitter or any other interactive computer service produces editorial content that, say, libels someone, they could be sued over that and would not have a section 230 defense.
Given that all are judgement calls, I'd say it's impossible for it to not be. There's a difference between merely removing something and giving your opinion on the contents.
Those rules could include biases like "no news that favors Trump's reelection", but they should be in their terms of service explicitly.
I'm open to hearing the opposite side if anyone has any arguments on why it shouldn't change
I fail to see how moderators on vBulletin boards in 2002 are any different from moderators/admins/algorithms on Twitter, Facebook, YouTube, etc. in 2020. The scale is different, sure, but you are not entitled to the amplification of these platforms just because they are bigger, the same way you weren't entitled to amplification on those old vBulletin board systems.
I generally support 230. There may be some tweaks that could be made that I would support, but the general concept behind 230 is correct.
> Are private entities responsible for the content they host?
Generally no. Private entities are responsible for the content that they publish, not for the content that they host. An example being the comment section on a news site versus the news article on a news site. The latter is content that they published, the former is content that they host.
> If we repeal 230 and someone posts something falsely defaming me, who do I sue?
IANAL, but I would imagine that if 230 is repealed and someone libels you, you sue the platform and the content creator.
If 230 isn't repealed you sue the content creator and ask for an injunction against the platform to remove the content.
That's what the law was before 230, except that it essentially applied to illegal content as well, because if the company tried to actively moderate, even just for illegal content, then it became liable for all content (it had to be 100% right if it tried to moderate illegal content).
Being liable for only known illegal content is distributor, not publisher, liability, which is what it superficially looks like 230 does on its face (courts have applied 230 to provide no liability, not even distributor liability; expressly imposing distributor-style liability would be a modest reform.)
Because the NYTimes pays staff to create its content and exercises complete editorial control over said content, and thus is fully liable for the content published as news articles on its site. The NYTimes, however, is not liable for the content in the comments section on said news articles, though they are allowed to remove comments that they do not wish to appear on their commenting platform.
Twitter is also liable for content that it publishes, eg: what the Twitter support account posts. But it is not liable for the content that I, dlp211, post to Twitter. Twitter still retains the right to moderate their platform as they see fit.
This applies not just to Twitter, but to every platform on the internet, from the Twitters and Facebooks and YouTubes to the MyNicheVBulletinBoard to Parler, 8chan, and 4chan.
Now I have mentioned previously that I am open to tweaks. While I haven't thought deeply about it, I would be open to considering a tweak along the lines that the platform becomes a publisher when it promotes content via human or computer decision making. This may or may not be a good idea after I think about it more deeply and discuss it with others, but the point is that I am not of the opinion that 230 is the end all be all but I am also of the opinion that I would rather live in a world with 230 than without it. I say that as someone that believes that their politics would be greatly benefitted by the repeal of 230.
IANAL, but even if they called themselves a platform they would still be acting as a publisher, and the law would treat them as such. For example, they would be editing and curating content, paying writers for content, and making the material available under their own name.
I don't see how that explains why it's "abuse" for a company to selectively remove content from their website, however.
Should Hacker News be required to treat all links the same? If not, exactly how do you think a government "neutrality" mandate would work?
Because such a mandate means that if the HN moderators are a little biased, the whole shebang would become liable for any defamatory comments posted here. That's a government-mandated sword of Damocles hanging over every single moderation decision made here.
You would sue the person who defamed you today.
Section 230 makes sites like Facebook, Twitter, Reddit, Youtube, Instagram and even Hacker News possible. Revoking Section 230 could expose those platforms to the possibility of liability for content posted on them. This might cause a re-shaping of the Internet in general.
Part of me seriously wonders - would that necessarily be a bad thing? I am not convinced, by any stretch of the imagination, that these social media opinion aggregation platforms are universally positive. Everyone keeps acting like the existence of Facebook somehow democratizes content publishing for the masses, even when we are faced with clear evidence that this isn't the case. The centralized nature of Facebook actually allows for larger scale manipulation of the narrative.
And how would this affect Uber, Airbnb, Amazon, Netflix and other sites? I suppose opening them up to liability for negative reviews could be a problem.
I'm thinking on the fly here, but if Facebook just disappeared off of the Internet tomorrow - I'm not really sure I would mourn that. And if new Internet companies were burdened with stricter moderation requirements (or the need to stand behind every piece of content posted onto their site), maybe that would actually be good? Maybe that would drive people to create their own websites once again.
I'm sure I haven't thought deeply enough on this but I definitely feel the tide here is a knee-jerk protection of Section 230. Yet the companies it protects the most are the ones I feel are the worst.
Facebook et al. could be big enough to wrangle the regulatory burden of existing without these protections. But many proposed 230 "reforms" could scare off anyone smaller, creating a regulatory moat that keeps Facebook at the top in perpetuity.
You may suggest that would prevent me from creating a user-generated-content application of massive scale without the resources to sufficiently moderate it. And again, yes - maybe that should be a requirement of me doing such a thing.
It isn't like the only kind of business a person could create on the internet is one that surrounds the aggregation of user generated content. If it killed that entire class of business ... I am not sure we would lose very much of value that couldn't be replaced by individuals hosting their own content.
Of even tiny, microscopic scale. Even your personal blog's comments, or the forum for your local club.
And it _still_ won't stop Facebook. It'll only stop _you_. That outcome doesn't sound like the outcome you're saying you may be comfortable with. It sounds like the opposite.
Oh no, I'm considering exactly that. I am saying: if my tiny personal blog has a comment section I would be liable for comments posted there. If that is a burden I can't handle then I should turn off comments. At least in my experience those comment sections are a complete waste of space anyway and the trend I've noticed from the large blogs that are still around (e.g. daringfireball, kottke) is that their comment sections are long gone anyway.
What I am pondering is: would this ruin the Internet? If I couldn't host a public forum if it got beyond my limited means to moderate? If I couldn't have a public comment section? It doesn't seem clear to me the Internet breaks if I am forced to own the responsibility for those things that I allow to be made public through sites I control.
It would break most of how the internet works today. It would prevent most forms of real-time, one-to-many communication, since a human would first have to moderate it.
Best example is public audio/video conferencing. I watched a conference presentation in real time today, which had real time comments in IRC and an audio/video conference question and answer. Neither of which would have been possible if real time moderation was required.
How would moderation even work for audio/video conferences? As far as I can tell it would not work, since no moderation could happen in real time while still allowing for smooth audio/video conferencing. What if we act as a platform, though, and claim no responsibility for the content? Then there is no ability to set topics or restrict offensive material, etc., so any random person (or bot - how would you tell the difference?) could stream in offensive videos or loud noise.
Losing most one-to-many, real-time communication would be throwing out a considerable amount of value.
Do you have solutions for issues like that?
edit: clarifying that I am talking about one-to-many real time communication.
This is already an issue on platforms like Twitch. All Twitch streamers are required by the Twitch ToS to moderate their chats and the streamers face bans if they fail to do so.
Everyone keeps talking about "breaking" the Internet but let's consider what would actually change. Let's say that I am unable to moderate my chat because it is overrun with malicious actors. What are my options? I can completely turn off chat for one. I could restrict chat to a manageable vetted subset of chatters that I am comfortable allowing to post with minimal moderation.
In fact, as a streamer I cannot possibly read hundreds or thousands of messages per second. At that point the very idea there is "real time communication" going on is a myth anyway. Every streamer has a way of limiting this deluge of input and moderation is how they are handling it.
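Those two fallbacks (a vetted allowlist plus a rate cap) can be sketched in a few lines. This is a hypothetical illustration of the idea, not Twitch's actual mechanism:

```python
from collections import deque

class ChatGate:
    """Hypothetical chat filter: only vetted users, capped message rate."""

    def __init__(self, allowlist, max_per_minute):
        self.allowlist = set(allowlist)
        self.max_per_minute = max_per_minute
        self.recent = deque()  # timestamps (seconds) of accepted messages

    def accept(self, user, now):
        # Reject anyone not vetted in advance.
        if user not in self.allowlist:
            return False
        # Drop timestamps older than the one-minute window.
        while self.recent and now - self.recent[0] >= 60:
            self.recent.popleft()
        # Reject once the window is full, keeping volume human-readable.
        if len(self.recent) >= self.max_per_minute:
            return False
        self.recent.append(now)
        return True

gate = ChatGate(allowlist={"vetted_viewer"}, max_per_minute=2)
print(gate.accept("vetted_viewer", now=0))  # True
print(gate.accept("random_troll", now=1))   # False: not vetted
print(gate.accept("vetted_viewer", now=2))  # True
print(gate.accept("vetted_viewer", now=3))  # False: over the rate cap
```

The point is that the deluge is a design choice: caps like this trade raw volume for something one person can actually moderate.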
> Losing most one-to-many, real-time communication would be throwing out a considerable amount of value.
This is where I feel everyone here is taking things too far. You don't lose communication - you become responsible for the communication you allow to be made public.
Currently Twitch is not held legally responsible if the streamers fail to follow the ToS, due to Section 230. If Section 230 was dropped, however, asking the streamers to moderate would not protect Twitch from legal consequences.
I do not see how Twitch's current rules would provide them legal protection, or provide a solution for moderation of one-to-many realtime communication, if Section 230 was dropped.
Maybe you have some assumptions about how new laws would be put in place post Section 230 that make this work?
> This is where I feel everyone here is taking things too far. You don't lose communication - you become responsible for the communication you allow to be made public.
It increases the barrier to entry so many, if not most, forms of one to many real time communication would have to be discontinued due to lack of the ability to moderate in real time. Hence they would be lost along with the value they generate.
I don't have to have assumptions, I know of real life examples. Not long ago I worked for a company that had a 24 hour live news broadcast. How do you think they handle this?
One specific example I can recall is closed captioning. These are federally required on all TV broadcast channels, including live. The company had a contract for a third party to manually transcribe the broadcast in real time. One initiative we wanted to explore was automating this process using speech recognition software. This was difficult because it turns out that incorrectly transcribed closed captions can lead to lawsuits. So the company that was contracted to handle the closed captions also provided insurance/indemnification against their service causing any lawsuits. No AI speech recognition solutions that were available also included this insurance so it was deemed too risky to switch.
One of the underlying assumptions in all of this Section 230 talk is "it's too hard to moderate the deluge of user generated content" .... so why even bother trying I guess? Why is that the underlying assumption? Why isn't the assumption: You can publish as much user generated content as you are capable of adequately moderating? The idea that the current free-for-all is some inherent "right" is perplexing to me.
> so many, if not most, forms of one to many real time communication would have to be discontinued due to lack of the ability to moderate in real time. Hence they would be lost along with the value they generate.
Again, that doesn't seem like the necessary conclusion. It prevents centralized platforms from publishing massive deluges of unvetted content. They are not in a position to moderate the billions of videos, chat posts, images uploaded each day. So, maybe instead of leaving the free pass we've given them open we question: is it reasonable for single entities to be the sole publisher of billions of unmoderated pieces of content? That doesn't mean all content everywhere goes away. It creates limits on what sole entities can accomplish.
Your example does not address my original question:
"How would moderation even work for audio/video conferences?" "Do you have solutions for issues like that?" In your example there must be a delay in broadcasting for the real-time captioning to happen in. That sort of delay precludes the real-time back-and-forth conversation that is expected from audio/video conferencing.
Secondly, I was also asking for a solution that would allow small-time actors, like in my example of the conference I attended, to continue to be able to host such conferences without undue liability or massive investment.
> One of the underlying assumptions in all of this Section 230 talk is "it's too hard to moderate the deluge of user generated content" .... so why even bother trying I guess?
I have not seen that be the main thrust of any argument. To my knowledge Section 230 was in part made to make it easier for more moderation to take place, since before it some had assumed any level of moderation would induce legal liability, both criminal and financial, for any content that made it past moderation.
> Again, that doesn't seem like the necessary conclusion. It prevents centralized platforms from publishing massive deluges of unvetted content.
As noted above, your example does not apply to my original question about a small-time actor using audio/video conferencing, so I remain unconvinced.
We'll have to agree to disagree then. For example, I am taking an online course right now. There is a chat room where people post questions and there is a moderator that elects individuals to speak. There is a private forum where people can post questions. I absolutely expect them to moderate that content to avoid any slanderous statements.
What happens now if someone calls into a radio station's live call-in program and starts spouting nonsense? Same thing that would happen on the Internet. Let's sketch it out. I don't have money/time to do any moderation at all (or pre-vetting of on-air guests) -- well then I better not do it at all, or I better be responsible for the consequences if a nut job gets on. I cut him off ASAP and do my best, and what happens? Does a SWAT team descend and smash down my door? No. Maybe I get a cease and desist and nothing more happens, maybe I get filed against in court, maybe I have to defend myself. Maybe I pay for liability insurance (as I mentioned happens in many other similar circumstances). Maybe I am legally compelled to remove from my servers any recorded content related to the incident. Maybe I have to pay damages for the time it was up.
This idea that private real-time conferences like online classes will be impossible in such a circumstance is the hyperbole I am starting to detest. Would there be changes? Hopefully yes. And my belief is these changes would positively affect all online discourse.
> This idea that private real-time conferences like online classes will be impossible in such a circumstance is the hyperbole
I have not used the word impossible or implied it. I have said the change would throw out "a considerable amount of value" and I have talked about value generating venues having to shut down.
I do not think you are understanding what I am saying if you pull 'impossible' from what I have said.
I take your statement: "would have to be discontinued" as equivalent to impossible. If you'd like to walk back that statement we can continue to discuss.
I have given examples of: Small teams of individuals (e.g. one Twitch streamer + a mod team) broadcasting to tens of thousands of people in real-time including call-in type segments. This happens today and I considered how this would continue if we repeal section 230 (including comparisons to existing public real-time radio call-in segments). I have given examples of medium sized teams presenting classes of 150+ individuals, each of whom can "raise their hand" and be selected by a moderator to take control of the stream to provide their own insights to the group in real time. I considered how this would be possible without Section 230. If neither of those forms of "one to many real time communications" fit your imagination then please be more specific and we can continue.
However, in all of this I still have not heard to my own satisfaction a description of a realistic loss we would have if Section 230 were to be revoked.
Do you actually have the means to moderate your forum and fund the legal trouble you might face should you get it wrong? Are you ready for the bad actors who abuse this bottleneck to take down content they don't like? I don't think many people are positioned to handle these burdens, and I think the internet will be ruined as a result.
>Part of me seriously wonders - would that necessarily be a bad thing? I am not convinced, by any stretch of the imagination, that these social media opinion aggregation platforms are universally positive.
Here's where you are wrong to think this. It doesn't protect social media giants, Hacker News and news websites. It protects literally everyone on the internet in the USA.
Thanks to this law you can't be held liable if you have a blog with a comment section. Anyone can post anything there, and you could be at serious risk of legal trouble if someone posted something that breaks the law on your website. Any communal website would either a) move out of the US or b) probably require some very strict controls on who can post and what.
The law doesn't protect American giants, it protects everyone that uses the internet to discuss.
I suggest you read this blog post on the issue, especially this part:
"If you said 'Section 230 is a massive gift to big tech!'
Once again, I must inform you that you are very, very wrong. There is nothing in Section 230 that applies solely to big tech. Indeed, it applies to every website on the internet and every user of those websites."
Even absent a comment section, most blogs are hosted by someone else (e.g. WordPress), who does not vet the content on that blog. That won't be possible anymore in a post-Section 230 world.
No problem, you say, I'll self-host my content on AWS and use CloudFlare as a CDN. But AWS and CF also have no section 230 protections any more either and probably won't do business with you unless you indemnify under some kind of insurance policy (which you won't be able to afford as a small blog).
Even if you run your own server and install it at home, the lack of section 230 protections will probably make your ISP responsible for content you publish (remember: your ISP is not a common carrier - thanks FCC) so you're probably going to find that all consumer ISPs are going to have terms of service that prohibit publication, and technical implementations that enforce that.
I mean, in the non-Internet world, if I want to make a newsletter and mail it out to a subscriber list; I can at least do that. All I need is a laser printer, and a stack of stamped envelopes. The postal service is a common carrier, at least.
Today's internet is a composition of platforms, all of which are really only possible due to the existence of section 230. It blows my mind that people are so blasé about the idea of tossing it out or reworking it in a naive way.
I think this is the kind of knee-jerk hyperbole I want people to really think deeply about.
> Thanks to this law you can't be held liable if you have a blog with a comment section.
Maybe that should not be protected. If I am unable/unwilling to moderate the comment section of a blog I host then maybe I shouldn't have one. I do not believe a completely open free-speech comment section is a requirement of a good or successful blog. Also, there is a business opportunity for those who want comment sections to pay for moderation services.
> I must inform you that you are very, very wrong.
This kind of patronizing is neither useful nor conducive to mature discussion. Aggregate user-generated-content sites aren't some kind of holy thing, forums, comment sections or otherwise. I want people to consider the fact that we may all be fine without them at all. Nothing stops people from posting whatever content they want on their own site. It just discourages aggregation of other people's content.
It's not about having to moderate a completely free speech comment section. It's about being able to even have one and to be able to host good people and good comments without having to be liable for the time when people act terribly.
Where I'm from I don't think a bar owner can be held liable if a patron starts a fight and wounds another patron. You don't open a bar with the explicit intent of it being an amateur boxing arena, but a nice place for people to enjoy drinks and conversation.
If a user online decides to breach trust and common courtesy by posting vile stuff on my site I shouldn't be held liable for their actions as I did not force or coerce them to do it.
>Nothing stops people from posting whatever content they want on their own site. It just discourages aggregation of other people's content.
Yes, nothing is stopping them. But 230 allows you to do more with the internet than just post things on your own site. That blog even describes an instance where, by replying to or forwarding an email (repeating the rule- or law-breaking thing another person said in order to comment on it), you are protected against liability.
If you dismiss these liability rules you are effectively removing everything from chatrooms to comment sections from the internet, effectively making the internet into a snailmail/bulletin board service.
Yes, I am pondering exactly this. How much of the Internet do I consider valuable that would be lost? I mean, lost in the sense it would be completely irreplaceable without Section 230 protection. To be fair to my position, the vacuum of unmoderated spaces would be filled one way or another.
Maybe people are surprised by such a position, but I don't actually see enough inherent value in unmoderated comment sections on private blogs or even all of the 1990's era phpbb forums to worry about their loss. In fact, when I consider the negative effects of the massive companies hiding behind Section 230 they really seem to heavily outweigh whatever positive effect comments on my personal blog could ever bring.
It seems these things keep coming up: public comments on personal blogs and anonymous forums. I want people to deeply think about whether or not these things are really valuable, and even more so whether or not they are irreplaceable without Section 230.
I completely agree, it’s clear that a successful business model around content moderation cannot exist in the internet’s current form, and there are very few consequences for people who post malicious content.
Would the open source software community survive? My world revolves around github more than any other site; would ticket discussions be forced back to mailing lists? Would mail readers pick that slack up, and re-implement social media over email?
I know this is questioning some fundamental assumptions of most of our moral principles, but I am literally questioning: should they survive? It seems everyone is assuming that they should. Some seem to suggest they should for abstract "free speech at all costs" philosophy. Others are assuming they have some kind of positive effect, either on personal growth or economic activity.
> Would the open source software community survive?
I'm not sure how Section 230 applies to code but I don't think public forums like Facebook, Twitter or Youtube are necessary for open source software to continue (any more than they were necessary for it to start, which happened long before they existed).
Besides, what I'm pondering is the centralized aggregation points. People could still host blogs, they are just directly liable for the content they post (as they likely are now).
That means no user reviews of anything. No user contributed information so no more Wikipedia or OpenStreetMaps. No Wikis of any kind in fact. No hosting of public data sets for ML research. Forums would be a liability nightmare so they would go away.
Not only would current/new instances of user generated content make sites liable, but hosting any historical content would as well. So to avoid liability the web would have to be scraped clean of user generated content.
Facebook and other large sites would be the only ones to survive because they could afford a moderator army. Your extremely short-sighted position would basically leave only large content producers. The Internet would regress to the curated Online Service model, but worse, because user communication would need to be disallowed or heavily moderated.
You're advocating for the Internet to turn into broadcast television. It's sad that you either can't or won't accept that implication.
> Your extremely short sighted position
I am getting this a lot in this comment section so far. I mean, hey we should all be thick-skinned, right? My disagreement with your conclusions or predictions of what would happen doesn't mean I suffer from myopia and it isn't very polite to suggest otherwise.
Central aggregation of user generated content isn't the only possible mechanism to do anything on the Internet. Removing the legal protections for aggregators may slow growth and make it significantly more difficult to centrally aggregate content, but that might actually be a good thing.
As I said and you conveniently ignore, user generated content would have to be scrubbed from the Internet except from the big players like Facebook that can afford an army of moderators. So first order problem is big sites like Facebook would be the only venues of user generated content.
It also creates a slippery slope for ISPs/hosting companies. It could easily be interpreted that an ISP or hosting company is hosting user generated content (they literally are), so they're liable for that content. They would not carry any content that might make them liable for anything, so they'll either shut down or moderate their services such that individuals have no means to post their own content. ISPs already restrict or block hosting content; they would only get more draconian if they faced legal liability for someone hosting a site.
You can't suggest a course of action that would consolidate power in big sites like Facebook and then opine about people posting content outside of those sites. There would no longer be an "outside" of Facebook because no small player could ever afford the moderation or insurance against liability.
You might dislike Facebook or YouTube and they might be filled with dreck. They might need their own types of regulation but dooming all user generated content because Facebook's management are assholes is not a fix.
Instead of getting offended when people point out your myopia maybe take a step back and apply some critical thought to your suggestions.
I've repeated it ad nauseam but I'll repeat it again - this is just hyperbole.
In what capacity do the majority of Internet users aggregate the content of others? I'm thinking about my own use of the Internet. In what capacity do I own domains or applications where I personally publish the content generated by others?
Let's consider a few possible cases (which do not apply to me). I have a blog with a comment section. Someone posts some comment that leaves me open to legal liability. I have a few options including turning off comments on my blog, moderating all comments on my blog before they go live, paying a third party to handle moderation (and indemnify/insure me against legal liability).
Second, I am passionate about some hobby and wish to create a public forum on a domain I host that allows for its discussion. Malicious participants start to show up and post content on this unmoderated forum that opens me up to legal liability. Again, I have to deal with this now in some capacity.
Third, I am an entrepreneur with my sights on a startup. This startup is like an Instagram, Pinterest, Reddit, Tumblr/Blogger, Medium, Quora, Yahoo Answers, Stack Overflow, etc. I am concerned that malicious actors will use my new venture to publish malicious content. Right now, I don't care; I just build it without any worry. Without Section 230 I have to seriously think about how I ensure content is moderated.
I'm just not seeing the Internet break in any of these scenarios. I'm not seeing the Internet being scrubbed of content. In fact, as far as I can tell all of the above happens already to some degree. Do you expect defamatory content on stack overflow? Or do you expect it to be removed?
For any criminal material that makes it through the third-party moderation, the publisher will be criminally liable. The third party can pay for the publisher's lawyer to defend the publisher in court, but if the publisher is found guilty they will be the one in jail, not the third-party moderation service.
Ensuring the above does not happen is in part why Section 230 was put in place, at least to my understanding.
Your argument here does not seem to consider criminal liability. Does that change your outlook, or was it already incorporated in your viewpoint somehow?
You're also far too focused on sites you seem to dislike. Making hosts liable for user content will also affect every single industry forum, mailing list, or chat system. A flame war on a Linux distro's mailing list could very easily get that distro sued out of existence just defending itself. Even an innocent error on a wiki could open up a bunch of volunteers to legal liability.
The core problem that you're ignoring or not seeing is user content doesn't need to be libelous or illegal to end up in court. There's legal trolls that sue people for stupid or frivolous shit all the time. Simply defending yourself costs money which is something a Linux distro or a fan maintained wiki don't tend to have in abundance. It's hard enough to sue some mailing list member or wiki contributor that it tends to only happen with legitimate issues. But if the bar is lowered that hosts become liable for user content the legal trolls will descend. It's not just the legal trolls with civil suits, there will be plenty of DAs and AGs looking for easy wins (to score political points) that will go after sites for stupid reasons.
Sites that can't afford some moderation service or liability insurance will just avoid user generated content. They'll also remove existing content, you know - scrub it from their site, because there's no way of knowing it won't attract a lawsuit. You assume defamatory content gets tagged by the poster as "#defamatory" and it's then immediately obvious.
Stack Overflow might take down an obvious troll post but what about a post pointing out a bug in a software product? If some developer became sufficiently upset they could sue SO because someone pointed out a major bug in their software. A completely above board discussion of a bug could easily be seen by the developer as defamatory. Just responding to a suit would cost SO money let alone actually defending themselves. There's a wide gulf between dealing with obviously offensive/malicious content and being under constant threat of legal action no matter how good your moderation.
If you don't publish anyone else's content you're fine. I don't give a shit about you. Maybe your personally published content is worthwhile, maybe it's complete shit. I don't know or care. But I do know there's some YouTube channels I really enjoy that wouldn't exist without YouTube as a platform. I have also benefited a great deal from Wikipedia among several other wikis that would not be able to operate if they were constantly threatened in court. I've definitely benefitted from product reviews, restaurant reviews, and OpenStreetMap contributions. All of that content I know has been worthwhile and I would much rather it exist and the platforms that enable it to exist.
> The needless extra liability either breaks the business model
We can agree, I hope, that if I had a shared image host then I should at least pay the cost to ensure no child pornography, snuff or whatever other common horrible images we can agree on are removed. I hope we can both agree that such expense might be impossible for some business models but that such expense isn't needless. We can probably also agree that Section 230 likely won't protect me from 3 letter government agencies insisting I remove classified content, even if my own morals would allow such content.
So yes, I'm asking for /extra/ liability but we can disagree on what is or is not needless.
> You're also far too focused on sites you seem to dislike
It may seem that way since I am arguing that Section 230 may be the seed from which they grew. I'm arguing that allowing single entities to re-publish volumes of content beyond their means to moderate may be bad at its core. Perhaps we should limit everyone's ability to post unlimited and unmoderated content. That includes me. So I can't just put a blank billboard in front of my house, allow anyone to write any slanderous thing on it, and then shrug and say "Section 230" when the neighbors complain.
On the topic of Stack Overflow, I wonder if they have taken down clearly false and libelous claims. Same with Wikipedia. I doubt either have a clean record either way.
> But I do know there's some YouTube channels I really enjoy that wouldn't exist without YouTube as a platform.
I want to take the time to descend into my own hyperbole just for rhetorical effect. Lots of the world was made better in small ways by tremendously horrific practices. I love Youtube and I watch it every single day. Careers have been born on it and a small number of millionaires. Does that mean that a single company controlling something like 90% of the personally created videos on the Internet is a good thing? For every Youtuber you like, how many of sufficient value have been buried by Youtube's algorithm?
Have you ever read the "Wikipedia has cancer" post? When you really look deeply at what we think we are protecting ... are you sure it is what you think it is?
I feel like I'm taking crazy pills as the nerds of the world seem to be cheering on the bullies as they steal, repackage and profit off of the user generated content of others. And when someone suggests that as a price they should at least be held responsible for the worst of the content they republish then everyone acts shocked, like how can these billionaires possibly manage all that.
What I'm saying is: if they can't manage it then they should stop. And if you can't figure out how to do it then you shouldn't even start.
Do you have an alternative that could survive if user-generated content is made legally risky to host and is subject to the whims of moderators on the few sites/publishers that can afford to bear those risks?
I say yes. Those forums of yesteryear were really good examples of where the internet can shine. The evolution of spammers made self-hosting prohibitively difficult, and the big fish grew fast and swallowed the market. It's a shame, and that's a reason that I'm generally in favor of breaking up the biggest players (though, I haven't seen a specific proposal that I'm in favor of)
> > Would the open source software community survive?
> I'm not sure how Section 230 applies to code but I don't think public forums like...
You seemed to miss the thrust of my comment about github. Github isn't just a repository of code, it's also a public forum! The ability to file and discuss bugs out in the open is a feature that would be sorely missed -- I've gotten two bug reports this week from previously-unknown users. That wasn't common back in the mailing list days, and I'm really happy that the bar to bug reporting is lower.
But to address the specific concern you brought up, removing Section 230 wouldn't prevent someone submitting bugs/issues. It would just force the moderation of those posts before they were made publicly viewable. For small projects that receive 2 or 3 bug reports a week I doubt that would be the massive issue everyone here is wringing their hands over. It becomes a problem with scale - like 1000+ issues per day on a project run by a single developer. But to be fair to my position - could such a developer even deal with that volume of issues even if the default behaviour was to make all posts public?
I grant that moderation slows discussion. For example, if you were asleep and someone posted an issue then before you even had a chance to moderate some other non-admin user might answer the question. Maybe we lose that. More likely we find a way to work around it.
I get your philosophical stance, and in some ways, I think it's healthy to question the fundamental assumptions about whether certain services should exist. But I also think you are willfully ignoring second and third order effects to continue this thought experiment where people are routinely showing that these secondary and tertiary effects will be crippling to more than just social media companies. And again, social media companies will not go away with 230 repeal; they are some of the only entities with enough capital to handle increased litigation costs as a result of 230 repeal.
How is that different from right now? Changing the default visibility of posted content from public to private doesn't change the ability of anyone to spam anything. You either have to clear out your moderation queue OR clear out your publicly visible forum after the fact. And isn't that better? By forcing moderation you are preventing that horrific content from being visible while you were asleep overnight. Right now the forum owner can just shrug it off "oops, I was asleep, not my problem". Without Section 230 that might not be permissible.
> you are willfully ignoring second and third order effects to continue this thought experiment where people are routinely showing that these secondary and tertiary effects will be crippling to more than just social media companies.
In what ways has anyone shown any secondary or tertiary effect that would be crippling? I would avoid allowing public comments on my blog? I would avoid creating a public discussion forum? I fail to see this as crippling.
I think people are vastly overestimating the impact of re-publishing the content of others on their personal lives. Yes, it could cripple some potential businesses. But my point is: should those businesses exist? Let's really think about exactly what we are giving up, not this fear mongering "destroys the Internet". Let's be specific. What would we lose that cannot be replaced?
As for second and third order effects, again, we have evidence from the past year of what will occur. The crackdown on personal ads, Tumblr, etc. all came as a result of changes to this law through FOSTA-SESTA. Those sites weren't all in violation of the law; they just made a determination that they cannot reasonably risk having to litigate what were previously not edge cases with the budgets they have.
The result? A lot of Tumblr and other traffic ended up on...Twitter. And why? Because Twitter, unlike other smaller entities, can weather litigation and regulatory costs much better relative to smaller competitors. Far from crippling the most egregious actors, it actually EMPOWERED them.
What you are talking about are just first order effects when you pontificate about people not just allowing things on a social media site. But as illustrated above, this will impact far more than just someone allowing comments or not. It can fundamentally reorient dynamics for all sites that allow for user content to be posted, and there are very strong likelihoods that it will lead to a greater concentration of power in incumbent social media companies, exacerbating the very issues you are most concerned about.
Section 230 prevents github from being sued over a user posting a malicious PR to my repo. Successful moderation, in that case, would require github to employ people who are familiar enough with my code to understand the impact of the PR. This is completely untenable.
Assessing legal risk of user-generated content is a financial barrier that only the companies you feel are the worst will be able to overcome. We're discussing law in the comments here like we're all lawyers, but let's face it: very few of us are up to the task of determining what is and is not illegal, and even fewer of us could actually survive if that assessment was challenged. User generated content ends up having a massive upfront legal cost, and I predict it will become extinct (both future and retroactive) for US-based sites if Section 230 is repealed...
> The centralized nature of Facebook actually allows for larger scale manipulation of the narrative.
...except on sites like Facebook, who make unimaginable amounts of money and can likely afford to fund private development of automoderation software and can weather the storm of lawsuits for content that manages to evade the filter. Facebook will only become more centralized as other online communication platforms are unable to bear the costs of publishing user generated content, and their control over the narrative will increase.
> Maybe that would drive people to create their own websites once again.
Sure, but how are people going to find these websites if I'm effectively reliant on the tech giants to tell people about it? Do you trust Facebook to not start censoring links to external websites? If there's no Section 230, then they could easily justify censoring off-site linking by saying they can't moderate the content of uncontrolled sites. How is Google going to exist if it's liable for what it links to? How are content aggregators going to exist? Forums? Chat rooms?
> Facebook just disappeared off of the Internet tomorrow - I'm not really sure I would mourn that
Same, but Facebook's not going anywhere. It'll just start charging its billions of users directly and continue telling me what I can and cannot read according to the whims of people I don't know and have no influence over. Meanwhile, all of my other options for discussion will slowly start disappearing as it becomes too costly to continue operating. There are better options than allowing that future to happen.
Even worse, we are discussing the future social effects of changes to law as if we were psychics. Not even lawyers or the best judges could claim to do that correctly. The default position seems to be "revoking Section 230 will ruin the Internet". I'm honestly trying to see how and I just don't see it. It would change the Internet and it would make certain classes of business more difficult.
> Facebook will only become more centralized as other online communication platforms are unable to bear the costs of publishing user generated content
I don't see a substantial change in Facebook's position either way. Is everyone still waiting for Mastodon to usurp it? Or maybe we dream of some young, ethical startup to win the hearts and minds of the globe and show us all how to be benevolent in this space? The idea that Section 230 somehow helps create the conditions that gets us out of the mess we are in is a pipe dream. I would love someone to sketch me out a plan, based around the legal protection around aggregating user generated content provided by the current laws, that slays the beast of Facebook.
> Do you trust Facebook to not start censoring links to external websites?
Not any more or less than I trust Facebook to show my posts on anyone else's feed. In a world where there are gigabytes of content generated each day, more content than any human could possibly digest, Facebook necessarily shows you some slice of it. The fact that we don't hold them accountable for the slice they choose is frankly crazy to me.
> How are content aggregators going to exist? Forums? Chat rooms?
Should /unmoderated/ content aggregators, forums and chat rooms exist? This is a fundamental question which I am scratching at.
It seems people are making a motte and bailey argument here. They seem to suggest content aggregation couldn't exist without Section 230. This is hyperbole. It would mean the platforms that aggregate user generated content would be forced to strictly moderate it or face legal trouble.
> How is Google going to exist if it's liable for what it links to?
That brings us firmly into territory of law that I am not familiar with. I know there are cases where the question of publishing links, and how that relates to content and/or copyright, is grey. However, what liability I would face if I were to post a link on my own blog to someone else's content is something I have no knowledge of.
> Meanwhile, all of my other options for discussion will slowly start disappearing as it becomes too costly to continue operating.
I'm not sure this is necessarily true. Outside of reddit and hacker news I can't think of any other space I even bother posting anything. The majority of my meaningful communication is done 1-on-1.
We're treating a specific class of communication as sacrosanct. Not even: I'm free to say what I want. But rather, I'm free to create open public spaces where anyone else can post anything they want. We're talking about a very specific kind of thing and I'm unsure if that specific thing is worth having at all.
If Facebook, Twitter, and Reddit all disappeared tomorrow it would be an enormous win for American citizens. The path to unification and healing does not run through big tech.
And yes, I understand that it would also be the end of HN. I accept that.
I remember how tough it was to moderate IRC channels that grew beyond a certain number of users. Imagine having to wrangle HUNDREDS OF MILLIONS of users by trying to outmaneuver all the bad-faith, harmful actors.
I'd rather live in a world where people can create websites and moderate them as they wish, since the alternative is probably no websites at all, because you are bound to run into bad-faith actors in life.
As long as we can still freely create websites online, there shouldn't be people who are against moderation.
For me, the problematic/key question and example is Facebook's (News) Feed. When content is collected, curated (algorithmically) with specific intent, published/presented in a particular order and layout to communicate and derive revenue, at what point is it a creative work with authorship?
If I prompt 100 people to comment on a topic by placing information in front of them, and then take portions of those comments, reorder and present them to you shaped by an overarching narrative of "what you may find interesting related to this topic" and place it on the front page of my website, in what way is this different than a newspaper?
A newspaper can be sued for defamation; however, Section 230 (c)(1) shields Facebook from any liability in the case where this selective curation and display of information contains known falsehoods or defamations. If any reasonable curator of facts (reporter) or newspaper editorial board would identify and reject these falsehoods (or be sued for publishing them), does Facebook get a pass because a computer did the curating?
*Edit: The reason I think this may be problematic is that it removes any check on purposeful misinformation that has traditionally existed on our previous methods of speech amplification (newspapers, tv, radio). Facebook has no incentive not to publish the most engaging information even if it is false, as it cannot be sued. If it could be, you would see it actively prevent misinformation. The standard would be what the courts would find it responsible for under existing libel laws, which is a difficult bar to clear, particularly for public persons, but is the only restraint on yellow journalism we've traditionally had.
Even if US law permits them to act as they will, it is dangerous for our society to have organizations that are essentially utilities provide a non-neutral platform. It doesn't make a difference if a private company is censoring you - the distinction is just cosmetic. The impact is as real as a government censoring you, since any alternative avenue of speech is significantly less effective and for most intents and purposes, simply doesn't exist.
Being big is not the same as being a monopoly. McDonald's is big, but it is not a monopoly because it has a lot of competitors. Regulating Facebook or Twitter as utilities would be as dumb as regulating McDonald's as if it were a utility.
If you make a habit of shitting in the dining rooms, you'll likely find yourself banned from all the restaurants eventually.
Sorry, was that Twitter? Or Facebook?
It's kinda hard to argue it's a monopoly when I can't figure out which one you're referring to and you're saying "Twitter and Facebook".
Meanwhile, this is why the USPS isn't a great comparison: https://en.wikipedia.org/wiki/Private_Express_Statutes
1. A government organization
2. An essential service
3. A de facto monopoly in many rural areas that are not profitable for private companies to serve.
Facebook/Twitter are none of these things.
HN: Facebook and Twitter are a stupid, pointless waste of time and you should delete your account and leave those platforms.
Also HN: Facebook and Twitter are essential services and banning people from Facebook and Twitter is a fundamental violation of their human rights.
Or what separates a heavily moderated online forum from a volunteer run online magazine? Is it the asking for submissions part?
So to those who keep saying “the internet as we know it is at stake”, I say... so what? Maybe we got it wrong.
I enjoy Facebook, Twitter, Reddit, etc. You don’t have to use them. Why can’t you respect that not everyone likes or wants to use the same websites that you do?
If one of the other URLs is a better fit, we can change it again.
I think Popehat is great, sometimes, but I also like HN's anti-snark guidelines because it brings the conversation up a level. Would I have made the same choice as dang today? Maybe not. Do I appreciate his transparency? Yes. Do I agree that it fits with the site guidelines? Yes.
As for the comments... I've read them a few times throughout the day. And lemme tell ya, folks are rarely responding to the Popehat article. But, given that the Popehat article was essentially a bunch of links to other articles after a couple of snarky paragraphs framing the issue, that's not too bad. At least one dead comment was griping about Popehat. Womp womp.
Was the change made without the explicit consent of every user? Um... Yes. But what entitles us to that level of control over HN? This isn't a direct democracy, it's a news aggregator. This site also allows us to edit our comments, without the explicit consent of every person who responds to them.
Would it be appropriate to sue HN into oblivion over this? Please, oh please, no.
What entitles you to that level of control over email? If any of the emails you sent were modified by Google, you would be outraged, but we should just accept "editorial control" over Twitter, Facebook, HN, etc.?
I think that's the real debate happening here. It's not just about censorship, it's about exercising control over the content. Many feel strongly that this sort of control should be in the hands of the creators of that content, not editors/moderators/site owners/etc.
> This site also allows us to edit our comments, without the explicit consent of every person who responds to them.
I would be ok if the person who originally submitted the article made that change himself and there was a history associated with the submission that showed the change. That's the creator exercising control.
> This isn't a direct democracy, it's a news aggregator.
It's kind of acting like a newspaper though? If content is being edited, that's the job of an editor. I don't think it's such a far stretch to imagine top comments being edited to improve the conversation, etc. At that point, what's the distinction from a newspaper?
In the couple of years that I've been here, I can only recall one instance of dang editing somebody's comment. He explained precisely what was changed, it was because a typo or something was causing the conversation to go sideways, it was after the edit window had closed, and the author thanked him for making the correction. As far as I can tell, the site maintainers are quite committed to transparency of that kind.
Maybe a "history" feature would be nice. OTOH, I appreciate that HN takes a very slow and deliberate approach to adding features to the site. Personally speaking, most social media crashes my phone browser, and HN is beautiful in its simplicity.
> It's kind of acting like a newspaper though?
If newspapers have live comments sections and don't contain the text of a vast majority of their stories... um, no. This isn't at all like a newspaper.
We usually post in the thread that we changed the URL and/or title and trust readers to be smart enough to figure it out. Some cases are worse than others, and in some of those I'll add replies to particular subthreads explaining that they were posted before the URL or title was changed.
Remember, if the info platform monopolies help the democrats today, they can help the republicans tomorrow.
> Among the most common lies: Section 230 requires sites to choose between being a “platform” or “publisher”
The idea that Twitter moderating its users' posts means it's acting like a "publisher" is nothing but Republican propaganda:
>Furthermore, a number of senators have prominently criticized Section 230. For example, Senator Ted Cruz (R-TX) repeatedly (but completely falsely) claims that Section 230 only applies to “neutral public forums."
Threatening Twitter and Facebook with liability for their users' content is authoritarian suppression of free speech. The US government should not be forcing Twitter or any other social media service to carry the president's re-election propaganda - even if the propaganda is factual, let alone if it's full of holes and lies like the NY Post story.
> If platforms are going to start acting like publishers, they should no longer get special treatment when compared to other publishers.
The GP doesn't argue that Section 230 says this or that, they're arguing that internet companies who act like news sites should be subject to the same laws as news sites.
1) they don't write the stories or in any way pay the journalists
2) news is a minority of the content
3) moderation is not the same thing as curation
4) hosting a story is not the same thing as publishing it.
If I post an NBC News article to Twitter, NBC is the publisher. If that article contains libel, NBC is the one on the hook in court, not Twitter. (However, if Twitter discovered the article was very likely libelous then it would be both reasonable and responsible to restrict sharing the article).
GP is really making one of two authoritarian arguments:
a) Platforms are not allowed to make broad decisions about what sorts of content they want to host. Presumably GP would then also agree that YouTube's ban on pornography means that YouTube is a "publisher," and that every time Reddit removes a racist subreddit it is acting like a "publisher."
b) If a platform does not want to host the president's dishonest re-election propaganda, they should expect to face financial and legal consequences.
Of course nobody would really say "b" out loud, hence the word games about "you see, Mastodon is a platform but Twitter is a publisher."
Making a factual correction without any additional commentary. Down the memory hole.
If in the future the Democrats are the lying ones, then those lies deserve to be removed too.
> In late 2017, when Facebook tweaked its newsfeed algorithm to minimize the presence of political news, policy executives were concerned about the outsize impact of the changes on the right, including the Daily Wire, people familiar with the matter said. Engineers redesigned their intended changes so that left-leaning sites like Mother Jones were affected more than previously planned, the people said. Mr. Zuckerberg approved the plans.
That is: Facebook decided to intervene to benefit the right. I don't think this is just because of right-wingers at Facebook: surely a large part of it is bad-faith attacks from people like Ted Cruz.
The idea that Twitter and Facebook are conspiring to suppress legitimate criticism of Biden and thereby defeat Trump is plain ridiculous.
 Story is here: https://t.co/sjOYrLQdc3?amp=1 but I got the blurb from this tweet: https://twitter.com/patcaldwell/status/1317140564169625600
As far as I know, they're not broadly removing "half the information" (which I'm taking to refer to conservative viewpoints), but disinformation related to QAnon, voting, covid, etc.
Disinformation is not something that will help anyone make better judgements.
The whole point is that the provider of the interactive computer service (ie "the platform") is not to be treated as the publisher of anything anybody says on the platform.
In the same way, a lot of people would say it would be neutral for media to present arguments that global warming is not man-made, but people who care about scientific fact would claim that even presenting the skeptic argument is non-neutral, since you are signal-boosting an argument with no basis in reality.
For the endless commentary on Trump profiting off the presidency, Trump running an "organized crime family", Trump this Trump that, we have actual hard, concrete evidence that a Vice President's cocaine addicted son was selling access to the office (presumably to fuel his addiction), and on top of that his father lied constantly to the American people about it.
How is this not a scandal?
Facebook has had its thumb on the 'balance' scale in favor of ultra-right-wing sites for half a decade, at least.
- Zuckerberg calling a group of non-profit news sites "not real news" 
- Zuckerberg ordering the algorithm of the news feed to be biased toward promoting Breitbart et al.
Furthermore, as for the law, there is a difference between platforms and publishers. Section 230 says that platforms will not be treated as publishers.
"No provider or user of an interactive computer service shall be treated as the publisher..."
FB and Twitter are interactive computer services in this case, and we call them "platforms." The law says they are not to be treated as the publisher of the content that users post on their site. Thus, there is a big difference between a platform and a publisher. That's the whole point of the law.
Downvote all you want, but... that's the law.
The law doesn't establish two categories, called "publisher" and "platform." It says, if User writes something on Website, then Website will not be treated as the publisher of whatever User wrote. Instead, User will be treated as the publisher. It defines who is responsible for the content that is served by Website: the User who wrote it, not the Website that served it. At no point does it create a category called "publisher" who is subject to different rules from a category called "platform."
Section 230 does not need to create those categories because they already exist under the law. Historically, content providers have been treated as either publishers, distributors, or platforms, and there are different rules for those categories.
If a law is saying someone isn't going to be treated as a publisher, it is implicitly saying they are going to be treated as a distributor or a platform.
Section 230 says that internet content providers aren't going to be treated as publishers of user content, while the same law also says that internet content providers will have some of the rights of publishers - for example, by moderating content.
Under Section 230, internet content providers are treated as distributors in some cases, for example where upon request they need to remove content that violates copyright, but not liable as long as they do so. They are treated as platforms in other cases, for example defamatory content. Although in some ways they have even more rights than offline platform providers - traditionally platform providers have a legal requirement to accept all traffic.
So 230 gives internet content providers the privileges, but not the obligations, of traditional publishers, along with the privileges, but not the obligations, of traditional platform providers.
The reasons this was done are spelled out in the findings and policies section of the law. Some of the reasons no longer make sense - I don't really think we need government policies at this point to "to promote the continued development of the Internet". And some of the things that the act called out as beneficial about the internet are being harmed by the current actions of internet content providers. We are seeing them act less and less like "a forum for a true diversity of political discourse".
That's why people are talking about modifying Section 230. If you get the benefits of a traditional publisher, maybe you should get the obligations as well. If you get the benefits of a traditional platform, maybe you should get the obligations as well.
And yes that would be a huge change in the way content is provided on the internet.
The whole point of the law is to say that website owners need not worry about being considered a publisher when they let other people post or comment or whatever.
> Really, this is the simplest, most basic understanding of Section 230: it is about placing the liability for content online on whoever created that content, and not on whoever is hosting it. If you understand that one thing, you'll understand most of the most important things about Section 230.
The way I understand it, these big sites aren't simply hosting content, they are themselves creators through editorializing content and so should not enjoy a blanket immunity.
If they don't get special treatment as opposed to publishers, why is explicitly mentioned that they shall not be treated as the publisher? If there's no difference, what's the point in that?
Are twitter and facebook being fair and transparent?
Isn't the very fact we're having the argument evidence that it's such a difficult problem we need some interim solution while the perfect AI algorithm gets worked out? Some solution that can last for a while if the perfect AI moderators never come.
No, instead it should be expanded to every publication.
We've since changed both the URL and the title.
We've since changed the URL (and the title) in keeping with another HN principle, of favoring original sources. The Popehat article is really just a list of links with a bunch of extra Popehattiness.
The point of the original article wasn’t just that people misunderstand section 230, it’s that republican politicians are conducting a propaganda campaign to willfully misrepresent section 230–and every thread on HN where someone launches into that tired and fallacious “publisher vs platform” spiel is evidence that it’s working. Facts aren’t inherently clickbait just because they displease the conservative HN massive.
The reason we do these sorts of edits is not driven by politics but by the attempt to optimize HN for curiosity (https://hn.algolia.com/?query=curiosity%20optimiz%20by:dang&...). The principles of how we do that have been worked out over the years, and they're not derived from political positions. Curiosity likes to cut across such boundaries—being limited by boundaries is not in its nature.
I get that above a certain threshold of political passion, the feeling becomes that the site ought not to be optimized for curiosity, but rather for political justice or some value like that. That's understandable—those are also good values. HN would just be a totally different kind of site if we did that. The question then is whether a site dedicated to curiosity has the right to exist on the web or not—including under current political conditions. I think it does. Why shouldn't it?
The main thing, though, is that your question doesn't feel like a question. To me it feels like you're trying to conscript me into political battle for a position I don't occupy to begin with. HN commenters who feel strongly on any topic (not just politics) sometimes make stories in which, for whatever reason, I, or rather "dang", gets cast as the enemy. That's a job hazard and inevitable, but those stories are not mine and that's not me.