Google accused of ‘double standard’ for punishing publishers for user comments (sociable.co)
52 points by hinchlt 4 days ago | 34 comments





I am worried that US senators confuse Section 230's protection from legal action with some mystical barrier that protects you from all consequences.

It’s like the people who claim that moderation infringes on free speech. “Not running ads” is not the same as legal action.


Ironically, if nobody but Google can benefit from Section 230 immunity, because people must obey Google's content moderation rules in order to remain viable in Search and Ads, then that's even more fuel for removing Section 230 to level the playing field.

[I work at Google, unrelated to Ads, I'm not a lawyer, views are my own, caveat emptor, etc.]

This pushes the limits of my Section 230 knowledge, but I think you've got this backwards. A company that wants to comply here needs Section 230 to exist; a company that is okay with ignoring Google doesn't care about Section 230.

Section 230 immunity isn't necessary for things that are completely unmoderated. If the comment sections are literally entirely unmoderated, they fall under the pre-existing case law (Cubby, Inc. v. CompuServe, Inc.).

However, if the site wishes to moderate comments for some reason, they could be held liable for comments that stay up but are problematic (libelous etc.). So without section 230, a site would be in a catch-22. Section 230 continuing to exist avoids this problem.


> if the site wishes to moderate comments for some reason, they could be held liable for comments that stay up but are problematic (libelous etc.). So without section 230, a site would be in a catch-22. Section 230 continuing to exist avoids this problem.

You have this exactly right. It's a fairly straightforward law. It's kind of bizarre how badly people (especially journalists!) misunderstand it.


The old-guard journalists tried to fabricate "techlash" out of thin air. They are lying and showing themselves to be hypocrites. Look at the softballs thrown to the powerful and to abusers, but as soon as they see a scapegoat for their failing business, the knives come out.

> a company that is okay with ignoring Google doesn't care about Section 230

This ignores the factual reality that Google is a monopoly in several verticals, and the number of companies that can ignore Google is actually zero. Every company needs to be in Google's good graces, whether for advertising, app installation on mobile phones, or search visibility.

Even all of Google's direct competition in any given market needs to support Google in other markets. Google is inescapable and compliance with their policies is as mandatory as actual law.


Or break Google apart. In a perfect market, ad companies would compete and their terms around UGC would be a part of that competition. Since Google is basically the only game in town, break them apart until they are no longer in a position to dictate what speech is acceptable on the internet.

Google should absolutely have the right to say no to running ads on sites they think are objectionable. The issue is that there's not a healthy market.


Google doesn’t want to dictate what is acceptable content on the internet. The people who buy ads don’t want to be associated with racism.

I am sure there are plenty of companies that would support what some consider racist content. Heck, remember that segregation was popular in its day, and the people who voted against desegregation are still alive.

That might be true, but I would give the counterexample of the adpocalypse. Most companies don't want to look like they are for racism.

Sure, maybe not multinationals, but there was still a whites-only bar in Alexandria until like 2005. Racists will find support from their own kind.

Honestly, it would be better for this sort of thing to be banned from the top down by governments, but they seem loath to call any "white" nationalist group terrorists, no matter how many weapons they bring to a rally or how many people they injure.

White is in scare quotes because you never know whom they are counting as such...


That is literally nonsensical. Section 230 means you need to sue the actual speaker of the speech and not where they posted. Google could start acting utterly insane (which would rapidly give rise to competitors) and that wouldn't compromise Section 230 at all because Google is not the court system!

Legally, the playing field is even. The real question is whether Google makes money from hateful content posted by users on its platforms, and that is an economic question, not a legal one.

This article avoids the distinction between law and economics at all costs, because acknowledging it would invalidate its entire thesis. I am not convinced Google is guilty of a double standard here.


Legally you are right, but this is a congressional inquiry, so lawmakers are looking for flaws in the law's application in real life.

If there are flaws, the law needs to be patched or abandoned.


This post is a poorly disguised argument that racist speech does not create hostile environments for non-racists, and misapplies one aspect of Section 230 while ignoring another that counters their own argument.

Section 230 explicitly grants Google, and publishers, the authority to restrict speech when it is offensive:

> (A) any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected

Racism is offensive content under these terms. If the publishers do not exercise their Section 230 rights to deny service to racist content and the racists that post it, Google will exercise their Section 230 rights to deny service — to restrict their 'banner ad' dialect^ of speech — to racist content and the sites that contain it.

To cut off the usual replies that try to invoke 'free speech' and 'but what about an unrelated example that doesn't include hate speech':

Racism is hate speech, and hate speech is not a form of protected speech. The only slippery slope to be considered here is 'what is considered hate speech?'. Racism is, unquestionably, hate speech. There is no slippery slope for racist speech. It's already at the bottom of the pit.

If this article were about content other than hate speech, it would be interesting. As it stands, it's just 'we shouldn't demonize racism' in the usual 'first amendment' style of overcoat.

Dissecting the article specifically, I find:

> Google threatened to demonetize The Federalist news outlet on the grounds that readers were leaving “racist” comments that advertisers didn’t want to be associated with.

Google threatened to withdraw service from a news outlet over racist user comments.

> The Federalist was targeted only because of its readers’ comments

Google was reacting only to the racist comments and not to the content published by the site operators themselves.

> the alternatives were to either ban comments altogether, moderate/censor them, or make them more difficult to access

Google identifies several technical solutions, but then we have here this most interesting appendix from "The Sociable" itself:

> — all of which discourage real engagement

This phrase suffix attempts to frame "take action against racist comments" as "unrestricted speech is the only 'real' form of engagement". This is false. Racist comments discourage real engagement. Discouraging racist comments discourages racist engagement. Racist engagement is not "real" engagement. It's just racist engagement.

> This means that publishers have to make their sites Google-friendly

"The Sociable" would like to remind you that the issue here is that Google is hostile to "racist comments" — yet, somehow, it's not interesting to them that a major news outlet, The Federalist, was found to have such a degree of racism in their user comments that Google bothered to react at all.

^ Hieroglyphics and GIFs both prove that images are a form of speech. So, then, are banner advertisements.


Unfortunately, the term "racism" has almost completely lost its meaning. In 2020, we have a large group of people whose professional existence depends 100% on finding and combating racism, as well as plugging themselves in as wealth-redistribution middlemen in the name of equality. It's the Cobra Effect [0] all over again. If you pay people to find racism, they will find racism, alright. They will also try to frame any criticism pointing to their own toxicity as racism, as otherwise they would risk losing their position.

I think we need to distinguish actual racism (i.e., treating people differently based on their skin color) from criticism of specific social-justice politics that are extremely inefficient, divisive, and make the problem worse in the long term.

[0] https://en.wikipedia.org/wiki/Cobra_effect


Replacing "racism" with "racist speech" is indeed more accurate, though I'll leave it unedited to keep the context for your comment intact.

> Racism is hate speech, and hate speech is not a form of protected speech.

Where did you ever get this idea? Hate speech is still protected under law. Of course Google doesn't have the same restrictions as the government; they could probably choose to censor anything containing the word "banana", but that doesn't change the status of free speech in general.


What does racism have to do with it? The crucial part of section 230 is [1]:

> No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.

Meaning, if someone makes a defamatory comment on foo.com, the person who made the comment is liable, not the operators of foo.com. And crucially, Section 230 makes this hold true regardless of whether foo.com moderates comments or not. Foo.com can remove content unrelated to whatever Foo.com is about. Prior to Section 230, Foo.com had to avoid moderating any content, as otherwise it might be held liable for the content users submit to the site.

Foo.com can remove racism, but not because racism isn't protected speech. As far as the government is concerned, there's no distinction between "hate speech" and any other form of speech [2]. Foo.com can remove it because it's a private company. Protections for speech for the most part only apply to the government curbing speech. Foo.com, Google, Facebook, and so on are private companies; they can ban whatever arbitrary content they want - not just hate speech or racist speech. They could suddenly decide that cat photos are forbidden and ban any users and groups that post pictures containing cats. None of this is illegal, nor would it make them liable for users' content.

What's controversial here is that Google is making users curate their own comments in compliance with what Google wants. It's outsourcing its moderation to its own users. But there's really nothing new about this. Reddit has been massively successful in pushing most moderation responsibilities onto its own users. If the moderators of a subreddit don't keep it clean, the whole subreddit gets shut down or has its moderators removed. The news here is that Google is starting to emulate this model.

1. https://www.eff.org/issues/cda230

2. https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat....


hate speech is a legally meaningless category under us law. where are you drawing this distinction of 'protected speech' from? also, importantly, it seems like racism is only viewed as needing moderation within one context. twitter is currently full of indian and chinese users posting what would ostensibly be considered virulently racist comments about each other's ethnic groups, and yet this is not subject to moderation. no one pulled facebook's ads when they enabled multiple ethnic cleansings, etc. it seems like the only racism people in power care about is arguably the least consequential: facebook dads and edgy teens using too many gamer words.

there will never truly be a moderation solution that can actually handle racism in either language or substance. ban one slur and 1000 more will be invented, while non-offending speech and the spectrum of permissible thought are continually eroded by this absurd and ill-conceived 'scorched earth approach towards making people not say the n-word'


> Racism is hate speech, and hate speech is not a form of protected speech.

You mean in the US? If so, which Supreme Court case are you basing this on?

Edit: As dextralt pointed out, I ask not because I expect HN posts to adhere to scientific journal standards, but because that claim is contrary to every Supreme Court decision in recent history, so I have trouble figuring out how you got that idea.

Edit 2: As dextralt's post is now flagged, let me reproduce what they cited:

Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution. The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment. -- https://en.wikipedia.org/wiki/Hate_speech_in_the_United_Stat...


Please use HN's reply feature to reply at HN. I do not reply to edit-replies like the ones you're using; I view them as a form of conversational warfare.

I would if I could - my posts were rate-limited. It seems to only take 1-2 downvotes to get rate-limited to 2 posts/hour or less.

You'll want to email the mods about that, using the footer Contact link.

I didn't think this would be needed in this post, but since you've asked for advice on the law —

Disclaimer: I am not your lawyer, I have not prepared citations for your review, please seek legal counsel if you’re considering actions based on my opinion, etc etc.

dextralt 4 days ago [flagged]

Oh reeeeeeeeeeally? You didn't think it was "needed"?

>Hate speech in the United States is not regulated due to the robust right to free speech found in the American Constitution.[1] The U.S. Supreme Court has repeatedly ruled that hate speech is legally protected free speech under the First Amendment. The most recent Supreme Court case on the issue was in 2017, when the justices unanimously reaffirmed that there is effectively no "hate speech" exception to the free speech rights protected by the First Amendment.


Please stop posting in the flamewar style to HN. We've asked you this before.

https://news.ycombinator.com/newsguidelines.html


Why is this being down-voted? It’s a thoughtful argument.

I suspect it's because "hate speech" is not, in fact, a category of unprotected speech under US law. Most legal scholars agree with this point, including many who are absolutely not fans of hate speech, like Ken White of Popehat:

https://www.theatlantic.com/ideas/archive/2019/08/free-speec...


It’s an argument that pulls in a bunch of political baggage and just isn’t relevant to the source article. If Google does indeed have a double standard - if they demonetize other websites for hosting nasty comments while defending their right to host nasty comments themselves - that’s bad regardless of what Section 230 permits.

There is no double standard in arguing that platform creators shouldn't be legally liable for racist user comments while simultaneously arguing that platform creators don't need to be paid for hosting racist user comments.

I'm not a Googler, but my impression was that Google mostly holds themselves to this standard. YouTube routinely demonetizes videos with questionable content and blocks ads aimed at racist keywords.


For whatever it's worth, I would support the concern you're describing if it were simplified to remove the whole Section 230 question entirely. Specifically:

"It would be a double standard if Google refuses to display advertising on other third-party websites alongside racist user content, but then displayed advertising on their own first-party websites alongside racist user content."

Google clearly states that they have automated detection of racism, so highlighting examples of Google displaying for-profit advertising alongside racist speech in Google Groups posts or YouTube comments would be a vastly more meaningful argument that they're applying a more restrictive standard to their customers than they apply to themselves.

Either Google does hold those of their own sites that display advertising on user content to the same burden of moderation that they demand from their advertising customers — or they do not, and are therefore hypocritical to demand it of others. Whether or not Section 230 exists, or is applicable, simply doesn't matter.


They make the argument that hate speech is not a form of protected speech, but it is.

I apologize for my poorly constructed argument. I should have spoken more clearly and I was in a rush. I'll probably screw this explanation up too, but the apology stands regardless.

Hate speech isn't Section 230 protected speech, in my opinion. Copying from above, with modified italics placement:

> any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or *otherwise objectionable, whether or not such material is constitutionally protected*

This is a list of forms of speech for which providers, under Section 230, may safely disregard constitutional protections. Racist speech and other forms of hate speech are, at minimum, 'otherwise objectionable'; therefore they are not forms of speech that Section 230 protects from moderation.



