Hacker News
Facebook Execs Nixed Employees' Plan to Quell Hate Speech: Report (businessinsider.com)
51 points by elorant 13 days ago | 80 comments





It seems this falls into the category of: Company has idea, investigates it, decides not to go ahead for the time being. Big deal. This is what all businesses do all the time. They task some employees to come up with a plan or to research a possible plan. Then they consider it from various angles, then they might decide to go ahead with it as is, modify it, or table it.

It is always an "Employees' Plan" because the people tasked with doing it are obviously employees, or sometimes consultants. The people making the decision are always "Execs" because they are executing a decision. This headline is designed to transmit a disturbing message, with nothing actually notable behind it.


There is something notable: a conservative VP once again protected those users at the expense of more vulnerable ones.

>Kaplan, the company’s most influential Republican, was widely known as a strong believer in the idea that Facebook should appear “politically neutral,” and his hard-line free speech ideology was in lockstep with company CEO Mark Zuckerberg. (Facebook recently changed its corporate name to Meta.) He bent over backward to protect conservatives, according to previous reporting in The Post, numerous former insiders and the Facebook Papers.

https://www.washingtonpost.com/technology/2021/11/21/faceboo...


At the expense of vulnerable ones?

You can literally use that sentence to quell any free speech just by saying I’m triggered.

If your feelings are hurt, stop reading a particular platform. There are enough groups online you can isolate yourself from any opinion you don’t like.


That'd work, except that the effects aren't limited to the platform: harmful information has real-world effects on people's material lives.

> like a user who posted an image of a chimpanzee with the caption, "Here's one of Michelle Obama."

Anecdotally, I’ve seen these sorts of images shared across Facebook many times, and it was a large part of my decision to step back my usage of the product around 2016, including deactivating my account for several years.

This topic is commonly viewed through a political lens (free speech, etc.), but I view it through a free-market lens: Facebook.com is full of garbage, and Meta can ignore user-experience issues, such as flamebait and hate speech, to their own detriment. It’s actively difficult, in my experience, to use their product in just about any way without running into heaps of inflammatory garbage.


I don't see any inflammatory garbage. It's all a matter of which friends and pages you follow.

An example in case you’re curious: I once learned about a train meme group from a friend. I’m a transit enthusiast, so I joined and I loved this group. For a while it was one of the highlights of Facebook.com for me. I’m lib-left politically, which fit with the lefty slant of the group, but still there came to be frequent shitstorm threads, all within the scope of lefty discourse: for example, upzoning vs. gentrification, should we ban cars, should we eat the rich, things like that. Eventually it got to the point where I hated seeing a post from that group on my timeline, because I knew it was going to be another comment shitstorm, and so I left the group.

Was it a moderation issue? Was it FB optimizing for engagement? Could algorithmic moderation of inflammatory content have even helped? I don’t know, but what was one of the few things I stayed on FB for, instead became a heap of garbage with the occasional scattered gem.


This is a moderation issue. It's also an issue of Facebook as a product vs. classic forums. A forum dedicated to transit would have subforums for political discussions, and a subforum for free-for-all. On Facebook I am not even sure that is possible in the same way.

But implementing a whole site filtering mechanism for derogatory art sounds like an extreme measure.


> But implementing a whole site filtering mechanism for derogatory art sounds like an extreme measure.

I’m not suggesting a specific fix, but rather pointing out that I personally have difficulty with inflammatory content on FB: friends I added back in high school share inflammatory bullshit constantly, groups that I join that ostensibly appeal to my interests seem to frequently devolve into various forms of inflammatory bullshit, etc.

Assuming that I’m not just unusually afflicted by bullshit, I expect that in the long term and in aggregate this is a major market issue for Meta.


Absolutely moderation.

I belong to scuba diving groups and they all have different rules on off-topicness.

Join or start a group that strictly prohibits political or off-topic discussions.


One of the issues is sometimes when groups grow big enough the moderators start putting in sponsored posts. There is an incentive to "sell out" groups that are big enough. Other times they just become strangely political even if the group originally had nothing to do with politics. As if some people get bored of sticking to the topic after a few years, and decide to use their groups to push their opinions.

It's definitely different group to group, though--some people won't experience it and others are puzzled why the "so I herd u leik mudkips" meme group they joined 10 years ago is spreading antivax images.


...should we eat the rich....

Of course we should. They make a delightful bolognese sauce.

But, seriously, I don't think this phenomenon you're seeing is because of anything Facebook is doing to metaphorically stir the pot. It's because there's not a lot of left unity as a whole. Ask any 5 socialists what their ideal socialist utopia would look like, and you'll get 7 different answers. I see this all the time on similarly-oriented Reddit subs. Reddit doesn't do anything to bring people together besides provide a forum, and they certainly don't do anything to algorithmically engineer conversations to go a certain way.


> I don't think this phenomenon you're seeing is because of anything Facebook is doing to metaphorically stir the pot.

Maybe not stir the pot specifically, but it’s their user experience and they own it. Which post shows up first on my feed greatly alters the user experience, and FB has to make a decision on that. It’s not chronological, and it seems to frequently involve me seeing 3 day old shitstorms.


I see. It sounds like they are showing you posts that get a lot of activity. That does kinda make sense: if a lot of people similar to you have already interacted with a post, that's a signal that you might like it, too.

But, tell me: would it make any difference to you if you saw these "3 day old shitstorms" when they were only 1 day old? If anything, what they really want you to do is discover these posts in their pre-shitstorm state, so you can be involved in getting the shit flying. It seems like this actually represents a failure of algorithmic moderation, to me.


> It seems like this actually represents a failure of algorithmic moderation, to me.

It's a failure of a lot of things, but ultimately a failure to create a product desirable to anybody not interested in flinging shit.


It seems you may have forgotten the old saying: if you're not paying for it, you're the product. Perhaps these practices create an ideal (or, at least, better than "display everything chronologically") environment for displaying ads to receptive people.

> It seems you may have forgotten the old saying: if you're not paying for it, you're the product.

Nope, I didn't forget—I don't entirely mind being their product, but it seems like Facebook forgot the fact that, if I'm their product, then they need to make it worth my time to go to it.

> Perhaps these practices create an ideal (or, at least, better than "display everything chronologically") environment for displaying ads to receptive people.

Maybe it does, but it doesn't seem like they're firing on all cylinders. Instead, it seems like they're trying to get more and more out of fewer and fewer "products".


This is true. Admittedly, my usage of Facebook (which I can only suspect is typical of other “casual” users) started with a period of active use in which I heavily curated and added content streams, then a long tail of passive use in which I primarily just consumed content already on my home page. During this long tail I wasn’t looking to curate (especially considering that unfriending is sometimes visible and noticed), so I’d just scroll past bad content; eventually the bad content wore me out and I quit.

I see SO MUCH inflammatory garbage. Every 3rd post is some new sponsored or recommended post from "Canadian Truckers" or "Car Society" complaining about Biden's gas prices (??), or memes about Kardashian plastic surgery, or "Let's Go Brandon", or companies selling socks.

I don't follow ANY companies. I have about 50 friends from high school and college. Actually, one friend owns a coffee shop so I follow their page. How do I let facebook know to stop showing me shit about truckers and right wing terrorism? Like, I block it, and it is back a few days later.


Just unfollow those friends and hide all from those pages. It takes like 2 clicks. Simple.

Maybe you should follow different people. I fail to see how this is Facebook’s fault.

Unfriend or unfollow people who don’t post what you want to see. Join groups you like.

This is like blaming Netflix for a random show they showed you when you actively give them indications you would like it. All you have to do is add shows you want to your queue.


How does Facebook make the leap from "I am friends with Bob" to "I want to see inflammatory right-wing hate"?

Is it because Bob occasionally interacts with inflammatory right-wing hate?

Is it because Bob has second-order friends who like inflammatory right-wing hate?

Is it because Bob himself likes inflammatory right-wing hate?

Why should one have to change their friend list, simply to stop a social media algorithm from shoveling content that that person repeatedly expresses they don't like? Wouldn't it be better to just have a check-box: "I don't want to see right-wing hate" and have Facebook honor that, regardless of what the person's friends' activity is?


Part of the grand tradition since "Bush or Chimp?" which was quite popular for a while. How soon we forget!

"The researchers proposed a tweak so that hate speech against Black, Jewish, LGBTQ, Muslim, and multiracial people would be taken down through an automated system."

Hmm I just read another article on this which said the proposed tweak was actually to STOP automatically blocking hate speech against white people.

I guess in light of that you can see how the quote is factually correct but worded in such a way to completely misrepresent the nature of the proposal


from the original source:

>They were proposing a major overhaul of the hate speech algorithm. From now on, the algorithm would be narrowly tailored to automatically remove hate speech against only five groups of people — those who are Black, Jewish, LGBTQ, Muslim or of multiple races — that users rated as most severe and harmful. (The researchers hoped to eventually expand the algorithm’s detection capabilities to protect other vulnerable groups, after the algorithm had been retrained and was on track.) Direct threats of violence against all groups would still be deleted.

>…

>But Kaplan and the other executives did give the green light to a version of the project that would remove the least harmful speech, according to Facebook’s own study: programming the algorithms to stop automatically taking down content directed at White people, Americans and men. The Post previously reported on this change when it was announced internally later in 2020.

https://www.washingtonpost.com/technology/2021/11/21/faceboo...


Which one is the accurate one? Seems like both are correct

A better way to quell hate speech is to provide the tools for individuals to do it, however they want to define it. Call it a bubble construction kit.

If you think a post is hate speech, mark it as such to exclude it and its creator from your bubble. Allow other users to subscribe to your bubble, and vice versa, such that you share your include/exclude lists with them, out to a given degree of separation, with any exceptions you might list.

So I could add ACLU, NPR and SPLC and various friends, celebrities, and politicians to my bubble and share their exclusions, and you could add NRA, FOX, FedSoc, etc., to yours. We effectively each get our own bespoke hate speech filters while hosted on a common carrier. This is decentralized, scalable, hyperdemocratic moderation. People of all ideologies can peacefully share the same platform, singing Kumbaya in harmony.
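A minimal sketch of how such subscribable exclude lists might compose, with blocks propagating out to a bounded degree of separation (class and method names here are hypothetical illustrations, not any real platform's API):

```python
# Sketch of a "bubble construction kit": each user keeps a personal
# block list and can subscribe to other users' lists, with blocks
# propagating along subscription chains up to a configurable degree
# of separation, and a personal exception list that overrides them.
# All names are hypothetical; this is not any real platform's API.

class Bubble:
    def __init__(self, max_degree=2):
        self.blocked = set()          # post/creator IDs this user blocked
        self.subscriptions = []       # other Bubble objects we trust
        self.exceptions = set()       # IDs we always allow, overriding subs
        self.max_degree = max_degree  # how far to follow subscription chains

    def block(self, item_id):
        self.blocked.add(item_id)

    def subscribe(self, other):
        self.subscriptions.append(other)

    def _collect_blocks(self, degree, seen):
        """Gather blocks from this list and subscribed lists, up to max_degree."""
        if degree > self.max_degree or id(self) in seen:
            return set()              # stop at the horizon, avoid cycles
        seen.add(id(self))
        blocks = set(self.blocked)
        for sub in self.subscriptions:
            blocks |= sub._collect_blocks(degree + 1, seen)
        return blocks

    def allows(self, item_id):
        if item_id in self.exceptions:
            return True
        return item_id not in self._collect_blocks(0, set())


# Hypothetical usage: inherit a trusted organization's exclusions.
aclu, me = Bubble(), Bubble()
aclu.block("hate_post_1")
me.subscribe(aclu)
me.exceptions.add("hate_post_2")      # personal carve-out
print(me.allows("hate_post_1"))       # False: excluded via subscription
print(me.allows("ordinary_post"))     # True
```

The degree-of-separation cap and the exception set are what keep this "bespoke" rather than viral: a list two hops away can't silently dominate your bubble, and you can always punch a hole for something you want to see.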


People should be empowered to create their own communities. The community aspect is really important and seems to get overlooked by the big social networks.

The big social networks aren't "overlooking" community. Communities with constructive relationships between humans who know each other on a personal level are _competition_ for the social networks, whose goal is to profit off being the middleman in every social transaction.

- If somebody shows you photos of their new baby in person, you don't have to go to Facebook and look at their ads to see them.

- If you need to buy a car, and you know somebody who is selling a car, you don't need to go to Facebook Marketplace and use them as an intermediary to buy a car from a stranger

- If politicians and businesses are interacting directly with constituents/customers in their community, they don't need to pay Facebook to boost their political ads.

It's not at all surprising, then, that social media companies (particularly Facebook) have done everything they could possibly do to undermine any community that risks drawing people away from the newsfeed.


I’d say you have an extremely zero-sum view of the world.

“ - If somebody shows you photos of their new baby in person, you don't have to go to Facebook and look at their ads to see them.”

If someone shows me their baby pictures I become better friends with that person. I’m more inclined to show them more of my pictures later on, online and offline. And then usually that’s reciprocated too


In addition to what you described, we can also add a bit more logic: "Only exclude if it is blocked by more than X subscribed lists" or "Only exclude if it is blocked by both Rachel Maddow and Ben Shapiro".
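A hedged sketch of what those combining rules could look like in practice (the list names and post IDs are made up for illustration):

```python
# Hypothetical combining rules over several subscribed block lists:
# exclude a post only if enough lists agree, or only if specific
# named lists all agree. Data here is illustrative, not real.

def blocked_by_threshold(item_id, lists, x):
    """Exclude only if more than x subscribed lists block the item."""
    return sum(item_id in lst for lst in lists.values()) > x

def blocked_by_all(item_id, lists, names):
    """Exclude only if every one of the named lists blocks the item."""
    return all(item_id in lists[name] for name in names)

lists = {
    "maddow":  {"post_a", "post_b"},
    "shapiro": {"post_b", "post_c"},
    "friend":  {"post_a"},
}

print(blocked_by_threshold("post_a", lists, 1))                # True: 2 lists > 1
print(blocked_by_all("post_b", lists, ["maddow", "shapiro"]))  # True: both agree
print(blocked_by_all("post_a", lists, ["maddow", "shapiro"]))  # False: only one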

That's putting a lot of pressure on individuals to manually curate and block content. Interestingly, you might end up creating an entirely new class of job -- social media curator. You could sign up for an individual (or multiple) user's block list. Then they curate the information that's informative, accurate, and fits a particular community. It's almost like a return to the old-school newspaper editor...

I'd count blocks in the same way Facebook tracks likes. "2534 People didn't want to see this". Also let people sort content by how many people blocked it, either direction.

That's great!

What button do I click to opt out of hate-speech-fostered discrimination, harassment, abuse, and genocide?

https://www.forbes.com/sites/ewelinaochab/2021/09/23/faceboo...

I'd really like that.


This does not address the problem of “Michelle Obama is a chimpanzee” or the “holocaust did not happen” going viral in one of those communities!

I think the point would be to let these things happen in smaller, isolated communities. You can wall them off from the rest of the world, but still let them share whatever they want.

I guess the benefit would be the ability to get updates from your crazy family members while not having to see everything they post?

Whether or not this is a net positive for the world is a different question.


That seems like an abuse of political power though. That policy would result in limiting exposure of views that are controversial or disagreeable to progressives, since Facebook/Twitter/Reddit/etc. clearly have a highly progressive employee base and moderation policies built around that worldview. Effectively, isolating exposure of some content in this way would tilt the political scales by only allowing other competing (un-isolated) views to benefit from network effects. That to me feels a lot like propaganda and it doesn't make it any more acceptable or holier that it is done by a domestic tech giant rather than a foreign state power.

If that community likes these ideas, that is not anybody else's business. Unless you believe there exists undeniable truth + an algorithm to find it, you have to accept that people can each decide what they believe is true - even [what you and I perceive as] hurtful and misguided ideas such as the above.

> If that community likes these ideas, that is not anybody else's business.

That's obviously not the case in most countries on this earth. Holocaust denial for example is illegal across the EU.


It is only illegal if done "in public" (whatever that is taken to mean). Arguably, a private Facebook group is not "in public".

I will also note that in most cases, these laws don't prohibit such speech unless it is done in a manner likely to lead to harm. How widely or narrowly that is commonly understood is a different question.


Sure it would, as it would mean we didn’t have to see it.

> so that hate speech against Black, Jewish, LGBTQ, Muslim, and multiracial people would be taken down through an automated system

And what happens when the system takes down quotes from the Quran--which Muslims believe to be not only the inspired, but the perfect, word of God--as "hate speech"?


>> so that hate speech against Black, Jewish, LGBTQ, Muslim, and multiracial people would be taken down through an automated system

What about hate speech against white people, or Christians? Have these groups been dehumanized to the point where that is just considered completely acceptable and uncontroversial now?


Considering hateful speech directed at white people to be outside the prohibition is similarly a reflection of the moral precepts of the people making the rules. To be more precise: when someone decries “white people” they’re not talking about people with white skin. They’re typically talking about people of a lower social class. “White people” are those white people whose education or profession doesn’t give them a different group to belong to.

As a “brown” person I think hateful speech directed at “white people” is offensive because so often it’s classist. But I don’t make the rules!


I don't recall any verse in the Quran that compares the black wife of a king or leader to a chimpanzee.

This is a pretty disingenuous slippery slope argument which can be used to argue against any censorship of virulent hate speech on any private platform.


There’s no slippery slope. Twitter will censor posts for misgendering people. Islam has pretty clear views on the separation of men and women, their separate roles, marriage (Mohammad was not a fan of single people!), etc.

I don't think people disagree that antiquated religious texts do contain a lot of hate speech.

Which "people"? The secular descendants of white protestants--people who, despite maybe no longer being believers, retain distinctly protestant views of religious authority and interpretation? Yes. But certainly not, say, virtually all Muslims.

I'm not disagreeing with the approach--I'm just pointing out that societies can't help but encode their religious and quasi-religious views onto putatively secular rules. What Twitter or Facebook deem "hate speech" is invariably dictated by what white Americans[1] think that means.

[1] Or others socialized into that culture. The practice of Islam here in America has a distinct influence of mainline Protestantism quite absent from the practice in Bangladesh. I was at an Islamic wedding for a family member several years ago, and the imam was narrating over passages from scripture to conform it to how Americans view things.


> the imam was narrating over passages from scripture to conform it to how Americans view things

Or the imam had been influenced by American culture and was conforming it to their own views.

I totally agree that people discount how much impact protestantism had/still has, but these views have been secularized and can thrive without any religious framework.

It's all culture, basically; I don't think I'm disagreeing with you.


I agree the imam has been influenced by secular culture. But I’d argue the secular culture has been influenced by Protestantism.

Most religious people are not zealots. I don't think they condone slavery, women's subjugation etc just because they go to church sometimes. You think they hang gays in Turkey?

Besides, the overreliance on the writings of scriptures is a distinctly protestant characteristic


"Overreliance" on scripture is an extremely important concept in islam. In fact, I think every major sect (sunni or shia) and madhab is centered on that "reliance". Even the islamic schools of thought that usually take a less literal approach to the koran would probably be much more literal in their interpretation than your average protestant denomination. Innovating on the scripture by derivation, by cherry-picking, or by selectively discarding part of it for social expediency is explicitly not allowed and is one of the biggest sins in islam.

In a way, from an islamic perspective, it's much better to just acknowledge that you are not living by the teachings of the koran, even if that means living a sinful life, than it is to try to remake or drastically reinterpret the koran in a way such that whatever you are doing isn't a sin anymore.

Now obviously that does not apply to the hadiths, which are, apart from the extremely consequential early caliphate internecine wars, probably one of the biggest sources of inter- and inner-sect conflict and disagreement--much more so than the koran itself, imo. How to interpret them, how much theological value they hold, and even which are valid are all questions that will probably be debated forever.

In fact we (in my direct entourage, not all Muslims obviously ;) ) always find it a bit odd how... cavalier christians can be with their scripture when it comes to picking what they like and discarding what they don't. I couldn't see the point myself, and I know enough to understand that christian theology is much more complicated and extremely intricate, but that's still what it can look like from my perspective. So it's funny to me to see someone say that not everyone is as literal as... protestants!


I think this is an excellent analysis. In my experience (my family is from Bangladesh) American Muslims tend to get around this by relying on their status as a religious minority. To draw an analogy: what did Jesus think of Roman divorce law? As a religious minority it didn’t matter what the Romans doing, unless they were regulating Jews directly. Similarly, American muslims don’t have to reconcile their religious beliefs and political beliefs because as a practical matter someone else is making the rules.

The dominant group faces a different problem, because someone has to make the rules. “When does life begin?” Clearly society must answer this one way or the other—we won’t let people exercise their personal “choice” at say 2 months after birth. If religion doesn’t decide that question, some other non-falsifiable moral framework will.


The majority of the population of Earth does, but ok…

The so-called Calcutta Quran case never made it out of the first court, so in India at least there's no legal basis for this.

I might trust Indian judges in 1985 to respect pluralism more than Facebook employees in 2021.

This article's reporting is false. They say "The researchers proposed a tweak so that hate speech against Black, Jewish, LGBTQ, Muslim, and multiracial people would be taken down through an automated system.", but the original Washington Post article they're summarizing (https://www.washingtonpost.com/technology/2021/11/21/faceboo...) says that the automated hate speech detection system already exists; the proposed tweak was to disable that system for hate speech against any groups other than the listed five, turning it back on only as an aspirational future goal when the algorithm "had been retrained and was on track". This is because the researchers weren't just aiming to quell hate speech in general - they wanted to change the fact that Facebook is (they concluded) "better at cracking down on comments that were harmful against White People".

So if you see a racist post, making derogatory remarks against your race and you report it, Facebook reviews it and says it meets community standards, then you send it for reconsideration and nothing happens for weeks... What are the next steps? Do you report Facebook to the Police or are they immune? I read many times that someone posted something on Twitter and police raided their home the next day. Can they do the same with Facebook for spreading racist messaging?

That would entirely depend on the local laws in your jurisdiction. In the US there is no law against creating or publishing racist content. (I'm not claiming that it's morally right to do that, but it's not illegal.)

In my country it is illegal, so I would expect at very least an order to block the site if they are not going to comply with the law.

The next steps are you move on. Dwelling on a Facebook post for weeks is not healthy.

I am trying to establish whether indeed we are living in a two tier society, where laws mostly only apply to the poor.

Which laws?

Hate crime

The headline’s framing is absolutely yellow journalism.

Good. "Hate speech" is a false label. All it amounts to is speech that some subset of society finds disagreeable and seeks to suppress. Some things that are classified as "hate speech", like explicitly racist content, are disagreeable to most of society. Even in those cases, I prefer upholding a universal right to free speech rather than suppressing open discourse. The ACLU of old became famous for defending civil liberties even when it came to disagreeable groups like the KKK. Even though the ACLU has since abandoned their classically liberal mission (https://www.tabletmag.com/sections/news/articles/the-disinte...), I think it is worth thinking back to how Ira Glasser ran the ACLU previously, in a principled manner where it was important to defend those rights at all times.

For the modern world, I think it is important to expect that giant private conglomerates uphold the civil liberties given to us under law in those private spaces they control. After all, they function more like a public utility and common carrier. Otherwise, they will abuse those powers in various ways, for example if their employees have a particular set of biases. It is unacceptable for our most basic and ubiquitous forms of communication to be controlled by tech giants that ignore fundamental societal values like free speech.

I also want to point out that not everything that is labeled "hate speech" is clearly bad. For example, Ibram X Kendi defines "racist" as anything that perpetuates inequities across racial groups, meaning differences in outcomes (rather than differences in opportunity). I think most people would not consider a difference in outcomes to be racist, but under Kendi's definition (which is part of the "anti-racist"/CRT movement), even something like an objective standardized test can be viewed as racist. So should speech defending standardized testing be censored on social media? What about speech defending Kyle Rittenhouse, which Twitter/Facebook/etc. have been censoring under their policy around violence/dangerous individuals, despite the recent ruling? Similarly, gender identity and trans issues are highly controversial and there is significant debate on how to deal with it. Does it really make sense to censor open discussion of this issue while it is so current? In all these instances we already have a tried and true solution - let the speech happen, let debate run its course, and avoid simply letting those in power control speech and other outcomes. I'm not sure why that ever became controversial.


[flagged]


Aren't all words just made up?

Neither does "poetry" or "obscenity" insofar as these are just labels for different styles/uses of language. That doesn't stop them from being useful.

"hate speech" is speech you are offended by, duhhh

“FB nixed plan to censor speech they hate”

I think precious few people genuinely dislike the idea of quelling hate speech. The real problem is who gets to decide what classifies as hate speech. When there is a large contingent of America chanting nonsense like "silence is violence" one moment and "punch terfs" the next moment, I'd rather that nobody gets to make the decision of what does and does not constitute hate speech.

I think there's a minority, but still substantial number of authoritarians, who absolutely do like the idea of quelling what they view as hate speech.

The classification problem is not one that can ever be agreed on, though, as you point out, so I'd rather that larger platforms like Facebook simply provide tools to choose and mark the kind of content you'd prefer to see, rather than blocking comments that a vocal group dislikes. The real solution to this problem was found many decades ago, and is of course ignored: go back to teaching children stuff like the following, rather than making victimhood into a currency.

> Sticks and stones may break my bones,

> But words can never hurt me.


What qualifies as hate speech and what to do about it is certainly arguable, but the underlying issue is that in both the U.S. and rest of the world there's a long history of powerful social groups fomenting actual, measurable, real-world violence and suppression of minority social groups.

It's true that social media is full of pearl-clutching, and that's what you propose to remedy with your "sticks and stones" curriculum, but it doesn't address the real problem of hate speech. And again, I can't stress this enough: I'm not saying that Facebook deleting posts is any kind of answer; what I'm saying is it seems you think the question of hate speech has to do with people being offended rather than people being actively harmed. Given the hyperbole in the media (social or otherwise) it's understandable why you would think that was the case, but social media isn't the best place to get your information.


Imagine making a phone-call and talking with your friend about some hot political topic and then an AT&T operator bursts in and says that this call is now blocked because you're engaging in hate-speech that might theoretically cause harm. Is this where we want society to go?

As far as real-world harm when it comes to permitting or not permitting speech goes, I'd put the danger of effective societal censorship creating authoritarianism that harms masses of people so far beyond the harm created by some random insane person who sees some negative comments about some group on social media that it's not even on the same planet.

In and of themselves, words cannot cause anybody harm.

As far as your concern about people being actively harmed goes, that's a big topic that's certainly debatable, both in terms of whether anybody is being harmed and of who is being harmed. I'm not sure this is the best venue for that kind of debate, because there's a lot that cannot be spoken about in a forum like this.

But I think your preferred social media network of choice isn't the proper avenue for this task. Society has developed a police and criminal justice system over thousands of years to prevent and punish actual criminal acts (violations of life, liberty, and property) while at the same time taking great care to protect the rights of the accused. It's imperfect in many ways, yes of course, but at the moment it's the system that's actually credible. The idea that society is expecting people like Mark Zuckerberg to effectively become the society-shaping, crime-prevention czar seems insane to me.


It's like talking to a wall.

> The real problem is who gets to decide what classifies as hate speech.

This is really hard because there are so many claims done in bad faith.

For example, comparing Michelle Obama to a chimpanzee is incredibly racist, but people will claim it's not, and will claim it's no different than comparing Marjorie Taylor Greene to a neanderthal. But this is entirely a bad faith claim, since the comparison of black people to primates has been a racist trope that spans decades or even centuries.


Thank you


