"It is “axiomatic,” the Supreme Court held in Norwood v. Harrison (1973), that the government “may not induce, encourage or promote private persons to accomplish what it is constitutionally forbidden to accomplish.” That’s what Congress did by enacting Section 230 of the 1996 Communications Decency Act, which not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so."
I wish we would revisit these rulings in light of the destructive potential of software.
We accept that a person can't falsely cry "fire" in a theatre to induce a panic. Why can't we accept that coordinating propaganda or special-interest messaging through misleading ads or fake accounts is also damaging?
Why can a country like Saudi Arabia or Israel both whitewash its crimes on Twitter/Facebook and suppress its enemies? Why can corporations and rich political campaigns target ads that prey on individual fears to advance their agendas?
Such opinions will trigger endless debate while the damage on the ground is real and ongoing. Post-reality is truly the terrible state of this world.
Isn't the government dictating the speech of private companies/people a bigger violation?
230 has nothing to do with censorship/moderation. If anything, it permits more types of content by absolving platforms of responsibility for it.
Should 230 remove the right for companies/individuals/platforms to have a say in the content they publish? Should platforms not be able to make decisions about how they moderate their own platforms?
230 is a legal protection afforded to platforms which host user-generated content. It absolves them of legal liability for the content posted by their users, providing they moderate illegal content properly.
> Should 230 remove the right for companies/individuals/platforms to have a say in the content they publish?
If they wish to be afforded the legal protections offered by 230, yes. It is a choice - not an obligation. A trade-off.
> Should platforms not be able to make decisions about how they moderate their own platforms?
They should be able to, but, again, accepting the responsibility of being a platform rather than a publisher means there should be limits on what can be considered inappropriate. There is certainly no easy answer for where that line is drawn, but it does need to be drawn. A heavy-handed moderation policy can effectively become a set of publishing requirements.
That interpretation is backwards. The "default state" of these websites would be getting sued into oblivion. 230 protected them from that, providing they act as neutral platforms. As they have continually expanded the list of content which is not allowed, whether or not they are acting as a neutral platform is being called into question.
Whenever one group accumulates too much power, that power is inevitably abused. The point of the policy is to limit the extent to which companies can control information.
I read the Tech Dirt article [1] and I don't understand how 230 has anything to do with "permit[ting] tech companies to censor constitutionally protected speech". Isn't it all about enabling tech companies to actually publish speech without being sued?
Companies and individuals have always been free to censor constitutionally protected speech (unless they are common carriers or some other special exception applies).
Section 230 protects companies from liability both when they censor speech and when they fail to censor unprotected speech.
> which not only permits tech companies to censor constitutionally protected speech but immunizes them from liability if they do so
Without that, they are not immune from liability for what someone else wrote. That means they would censor more, because they would be liable, full stop.
Private companies are allowed to censor on their own servers by default; Section 230 does not allow them to censor any more than before.
> Without that, they are not immune from liability for what someone else wrote. That means they would censor more, because they would be liable, full stop.
> Private companies are allowed to censor on their own servers by default; Section 230 does not allow them to censor any more than before.
In the absence of 230, platforms would either censor everything vigorously or moderate nothing except outright illegal content (e.g. child pornography). §230 gave them the flexibility to enforce some "community standards" without subjecting the platform to liability as a publisher.
https://www.wsj.com/articles/save-the-constitution-from-big-...?