It’s like the people who claim that moderation infringes on free speech. “Not running ads” is not the same as legal action.
This pushes the limits of my section 230 knowledge, but I think you've got this backwards. A company that wants to comply here needs Section 230 to exist, a company that is okay with ignoring Google doesn't care about Section 230.
Section 230 immunity isn't necessary for things that are completely unmoderated. If the comment sections are literally entirely unmoderated, they fall under the pre-existing case law (Cubby, Inc. v. CompuServe, Inc.).
However, if the site wishes to moderate comments for some reason, they could be held liable for comments that stay up but are problematic (libelous etc.). So without section 230, a site would be in a catch-22. Section 230 continuing to exist avoids this problem.
You have this exactly right. It's a fairly straightforward law. It's kind of bizarre how thoroughly people (especially journalists!) misunderstand it.
This ignores the factual reality that Google is a monopoly in several verticals, and the number of companies that can ignore Google is actually zero. Every company needs to be in Google's good graces whether it be for advertising or app installation on mobile phones or search visibility.
Even all of Google's direct competition in any given market needs to support Google in other markets. Google is inescapable and compliance with their policies is as mandatory as actual law.
Google should absolutely have the right to say no to running ads on sites they think are objectionable. The issue is that there's not a healthy market.
Honestly it would be better for this sort of thing to be banned from the top down by governments, but they seem loath to call any "white" nationalist group terrorists, no matter how many weapons they bring to a rally or how many people they injure.
White is in scare quotes, because you never know who they are counting as such...
This article avoids the distinction between law and economy at all costs because it would invalidate its entire thesis. I am not convinced Google is guilty of a double standard here.
If there are flaws, then it needs to be patched or abandoned.
Section 230 explicitly grants Google, and publishers, the authority to restrict speech when it is offensive:
> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected
Racism is offensive content under these terms. If the publishers do not exercise their Section 230 rights to deny service to racist content and the racists that post it, Google will exercise their Section 230 rights to deny service — to restrict their 'banner ad' dialect^ of speech — to racist content and the sites that contain it.
To cut off the usual replies that try to invoke 'free speech' and 'but what about an unrelated example that doesn't include hate speech':
Racism is hate speech, and hate speech is not a form of protected speech. The only slippery slope to be considered here is 'what is considered hate speech?'. Racism is, unquestionably, hate speech. There is no slippery slope for racist speech. It's already at the bottom of the pit.
If this article were about content other than hate speech, it would be interesting. As it stands, it's just 'we shouldn't demonize racism' in the usual 'first amendment' style of overcoat.
Dissecting the article in specific, I find:
> Google threatened to demonetize The Federalist news outlet on the grounds that readers were leaving “racist” comments that advertisers didn’t want to be associated with.
Google threatened to withdraw service from a news outlet over racist user comments.
> The Federalist was targeted only because of its readers’ comments
Google was reacting only to the racist comments and not to the content published by the site operators themselves.
> the alternatives were to either ban comments altogether, moderate/censor them, or make them more difficult to access
Google identifies several technical solutions, but then we have here this most interesting appendix from "The Sociable" itself:
> — all of which discourage real engagement
This phrase suffix attempts to frame "take action against racist comments" as "unrestricted speech is the only 'real' form of engagement". This is false. Racist comments discourage real engagement. Discouraging racist comments discourages racist engagement. Racist engagement is not "real" engagement. It's just racist engagement.
> This means that publishers have to make their sites Google-friendly
"The Sociable" would like to remind you that the issue here is that Google is hostile to "racist comments" — yet, somehow, it's not interesting to them that a major news outlet, The Federalist, was found to have such a degree of racism in their user comments that Google bothered to react at all.
^ Hieroglyphics and GIFs both prove that images are a form of speech. So, then, are banner advertisements.
I think we need to distinguish actual racism (i.e. treating people differently based on their skin color) from criticism of specific social justice politics that are extremely inefficient, divisive, and make the problem worse in the long term.
Where did you ever get this idea? Hate speech is still protected under law. Of course Google doesn't have the same restrictions as the government; they could probably choose to censor anything containing the word "banana", but that doesn't change the status of free speech in general.
> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
Meaning, if someone makes a defamatory comment on foo.com, the person who made the comment is liable, not the operators of foo.com. And crucially, Section 230 makes this hold true regardless of whether foo.com moderates comments or not: foo.com can remove content unrelated to whatever foo.com is about. Prior to Section 230, foo.com would have had to leave content entirely unmoderated, because moderating at all might make it liable for the content users submit to the site.
Foo.com can remove racism, but not because racism isn't protected speech. As far as the government is concerned, there's no distinction between "hate speech" and any other form of speech. Foo.com can remove it because it's a private company. Protections for speech for the most part only apply to the government curbing speech. Foo.com, Google, Facebook, and so on are private companies; they can ban whatever arbitrary content they want, not just hate speech or racist speech. They could suddenly decide that cat photos are forbidden and ban any users and groups that post pictures containing cats. None of this is illegal, nor would it make them liable for users' content.
What's controversial here is that Google is making users curate their own comments in compliance with what Google wants. It's outsourcing its moderation to its own users. But there's really nothing new about this. Reddit has been massively successful in pushing most moderation responsibilities onto its own users. If moderators of a subreddit don't keep their subreddit clean, the whole subreddit gets shut down or has its moderators removed. The news here is that Google is starting to emulate this model.
there will never truly be a moderation solution that can actually handle racism in either language or substance. ban one slur and 1000 more will be invented, while non-offending speech and the spectrum of permissible thought are continually eroded by this absurd and ill-conceived 'scorched earth approach towards making people not say the n-word'
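To make the point concrete, here is a toy Python sketch (purely illustrative, not how any real platform works) of why a naive blocklist filter can't keep up: an exact-match list catches the original spelling but misses trivial variants, so the list has to grow forever.

```python
import re

# Hypothetical blocklist with placeholder words; a real system would be
# far larger and still face the same evasion problem.
BLOCKLIST = {"slur1", "slur2"}

def is_blocked(comment: str) -> bool:
    """Flag a comment if any token exactly matches a blocklisted word."""
    tokens = re.findall(r"[a-z0-9]+", comment.lower())
    return any(token in BLOCKLIST for token in tokens)

print(is_blocked("that slur1 again"))   # exact match: caught
print(is_blocked("that s1ur1 again"))   # trivial leetspeak variant: missed
print(is_blocked("harmless comment"))   # clean text: passes
```

Every misspelling, leetspeak substitution, or coined euphemism requires a new blocklist entry, which is the arms race described above.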
You mean in the US? If so, which Supreme Court case are you basing this on?
Edit: As dextralt pointed out, I ask not because I expect HN posts to adhere to scientific journal standards, but because that claim is contrary to every Supreme Court decision in recent history, so I have trouble figuring out how you got that idea.
Edit 2: As dextralt's post is now flagged, let me reproduce what they cited:
Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution. The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment. -- https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...
Disclaimer: I am not your lawyer, I have not prepared citations for your review, please seek legal counsel if you’re considering actions based on my opinion, etc etc.
>Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution. The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment.
I'm not a Googler, but my impression was that Google mostly holds themselves to this standard. YouTube routinely demonetizes videos with questionable content and blocks ads aimed at racist keywords.
"It would be a double standard if Google refuses to display advertising on other third-party websites alongside racist user content, but then displayed advertising on their own first-party websites alongside racist user content."
Google clearly states that they have automated detection of racism, so highlighting examples of Google displaying for-profit advertising alongside racist speech in Google Groups posts or YouTube comments would be a far more meaningful argument that they're applying a more restrictive standard to their customers than they apply to themselves.
Either Google holds its own sites that display advertising on user content to the same burden of moderation that it demands from its advertising customers, or it does not, in which case it is hypocritical to demand it of others. Whether or not Section 230 exists, or is applicable, simply doesn't matter.
Hate speech isn't Section 230 protected speech, in my opinion. Copying from above, with modified italics placement:
> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or *otherwise objectionable*, whether or not such material is *constitutionally protected*
This is a list of forms of speech whose constitutional protections providers may, under Section 230, safely disregard. Racist speech and other forms of hate speech are, at minimum, 'otherwise objectionable'; therefore they are not forms of speech protected by Section 230.