
Last I checked, content moderation can’t cause cancer or birth defects.


Yes, but if you're not being pedantic for the sake of scoring points, all the aforementioned things are capable of causing significant harm if mishandled, so it's still quite relevant.


Equating having a post arbitrarily or even maliciously removed from a social media platform to physical harm is an extremely absurd, out of touch, pampered, and privileged position.

Mishandling of heavy metals can cause lifelong effects, not just to those handling them but to anyone in the vicinity.[0] It's estimated that 1M people die per year from lead poisoning.[1]

Content moderation cannot directly cause any physical harm. If you consider indirect physical harm related to all social media (which I'd have more sympathy toward), it would not come close to the effects of heavy metals and other substances known to the State of California to cause cancer, birth defects, or other reproductive harm.

0 - https://amp.theguardian.com/world/2009/aug/20/china-children...

1 - https://www.who.int/news/item/23-10-2022-almost-1-million-pe...


But the government isn't allowed to regulate speech based on harm unless that harm would result from lawless action that the speech incites and that is likely to occur because of the speech.


People should really consider this more as a "truth in advertising" type law. This isn't about whether Twitter follows its policies as written, since you would be hard pressed to convict any company merely for not following its own rules about something that isn't illegal (allowing inflammatory speech on your platform, or even signal-boosting such speech). It's more about requiring companies to be honest about how they moderate their users' participation.

This law is so consumers can make educated choices about the platforms they want to use.


What is the significant harm caused by mishandled content moderation?

... for that matter, what does it mean for content moderation to be "mishandled?"


> What is the significant harm

Stochastic terrorism.


That's a very good point and I have no arguments with it.

Unfortunately, nothing about the California law really addresses it. The Fifth Circuit decision regarding coercion of social media sites will bind the states via the Fourteenth Amendment, so California can't really enforce anything if it disagrees with a company's moderation policy.

That means the law reduces to perfunctory data collection, and it doesn't really tell consumers anything that logging into the site and going "Gee, this site sure is full of white supremacists advocating stochastic terrorism and nobody does anything about it" wouldn't tell them.


> Stochastic terrorism

Is not the same as actual terrorism.


Irrelevant, though, as it still causes harm, which is the original differentiator.



