
What I think I just read is that content moderation is complicated, error-prone, and expensive. So Meta is going to do a lot less of it. They'll let you self-moderate via a new community notes system, similar to what X does. I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.

They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore. This is another big win for Meta, because minimizing their investment in content moderation and simplifying their product will reduce operating expenses.






  > it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.
To me it sounds better for large actors who pay shills to influence public opinion, like Qatar. I disagree that this is better for either Facebook users or society as a whole.

It does, however, certainly fit the Golden Rule: he who has the gold makes the rules.


I was under the impression that Community Notes were designed to be resistant to sybil attacks, but I could be wrong. Community Notes have been used at Twitter for a long time. Are there examples of state-influenced notes getting through the process?

Twitter's Community Notes were designed to be resistant to sybil attacks. Meta is calling their new product Community Notes, but it would be a mistake to assume the algorithms are the same under the hood. Hopefully Meta will be as transparent as Twitter has been, with a regular data dump and so on.
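For the curious, the sybil resistance in X's published design comes from "bridging": a note is only surfaced if raters who usually disagree with each other both find it helpful, computed with a small matrix factorization rather than raw vote counts. Below is a minimal sketch of that core idea; the variable names, hyperparameters, and threshold are illustrative assumptions, not the production Birdwatch code.

  # Sketch of bridging-based note scoring (the idea behind X's published
  # Community Notes algorithm). Simplified: the real system has more
  # terms, regularization schedules, and status hysteresis.
  import numpy as np

  def score_notes(ratings, n_users, n_notes, epochs=200, lr=0.05, reg=0.1):
      """ratings: list of (user, note, r) with r=1.0 for 'helpful',
      0.0 for 'not helpful'. Returns each note's intercept: its rated
      helpfulness AFTER the model explains away viewpoint alignment."""
      mu = 0.0                              # global intercept
      bu = np.zeros(n_users)                # per-user intercept
      bn = np.zeros(n_notes)                # per-note intercept
      fu = np.random.randn(n_users) * 0.1   # user viewpoint factor
      fn = np.random.randn(n_notes) * 0.1   # note viewpoint factor
      for _ in range(epochs):
          for u, n, r in ratings:
              err = r - (mu + bu[u] + bn[n] + fu[u] * fn[n])
              mu    += lr * err
              bu[u] += lr * (err - reg * bu[u])
              bn[n] += lr * (err - reg * bn[n])
              fu[u], fn[n] = (fu[u] + lr * (err * fn[n] - reg * fu[u]),
                              fn[n] + lr * (err * fu[u] - reg * fn[n]))
      return bn

  # A note ships only if its intercept clears a bar, e.g. bn[n] > 0.4.
  # Ratings explained by one-sided agreement inflate fn, not bn, so a
  # sybil army voting in lockstep mostly moves the viewpoint term.

The upshot is that adding more same-viewpoint accounts has diminishing returns, which is the sybil-resistance property in question.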

Qatar is not well known for paying people to bot on social media. They play the RT game instead, using their news network Al Jazeera to give their propaganda a professional air. The first country to do this was India[1]. Israel has special units in the army for this[2]. At this point so many countries pay people to do what you describe, but Qatar doesn't, from what I can tell. If you have proof of it, I'm all ears.

I was cautiously optimistic when this was announced that India and Saudi Arabia (among others, incl. Qatar) might see some pushback on how they clamp down on free speech and journalism on social media. But since Zuck mentioned Europe, I fear those countries will continue as they did before.

[1] https://en.m.wikipedia.org/wiki/BJP_IT_Cell

[2] https://www.bbc.com/news/blogs-news-from-elsewhere-23695896


How is that different from fact checkers? They can also be driven by large actors who pay shills to influence public opinion.

The only difference is that the name "Community Notes" is less misleading than "Fact Checkers".


Fact checkers are employed by Meta?

And you are trying to say that makes it better?

Sure, I'll trust the leadership of this huge commercial company, famous for lots of controversies regarding people's privacy. I'll trust them to decide for me what is true and what is not.

Great idea!


You can just pay people, regardless of their place of employment.

Who are pushed by the government to censor vaccine side effects:

https://www.cnbc.com/2025/01/10/mark-zuckerberg-says-biden-p...


> it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.

Or maybe such people have far better things to do than fact check concern trolls and paid propagandists.


I pay for some news subscriptions now. I actually love it. Read it, support journalism, log off. Done.

Right, so from where?

Many of us might pay for journalism if we knew who was producing content not already beholden to some ridiculous bias sink.


Check out Ground News. Then you can choose your specific poison :)

There do seem to be a lot of people who enjoy fact checking concern trolls and paid propagandists.

I'm not sure if they do more good than harm. Often the entire point seems to be to get those specific people spun up, since the troll is not constrained to admit error no matter how airtight the refutation. It just makes the fact-checkers look as frothing as the trolls claim they are.

And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.


> Often the entire point seems to be to get those specific people spun up, realizing that the troll is not constrained to admit error no matter how airtight the refutation.

Your point is exactly why I can't take seriously anyone who claims that randoms "debating" will cause the best ideas to rise to the top.

I can't count how many times I've seen influencer propagandists engage in an online "debate", get walked step by step through how their entire point is wrong, only to spew the exact same thing hours later at the top of every feed. And remember, these are often the people with some of the largest platforms, claiming they're being censored… to millions of people, lol.

It's too easy to manipulate what rises to the top. For debate to be anything close to effective, all parties involved have to actually be interested in coming closer to the truth, and the algorithms have no interest in deranking sophists and propagandists.


> And yet, it's also unclear if any other course of action would help. Despite decades of pleading, the trolls never starve no matter how little they're fed.

Downvotes that hide posts below a certain threshold have always seemed like the best approach to me. Of course it also allows groups to silence views.
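As a sketch, that mechanism, plus one common mitigation for the group-silencing problem, might look like the following; the threshold, cap, and "cohort" idea are all illustrative assumptions, not any particular platform's rules:

  # Threshold hiding: collapse a post once its net score drops too low.
  HIDE_THRESHOLD = -5  # made-up number

  def is_hidden(upvotes: int, downvotes: int) -> bool:
      return upvotes - downvotes < HIDE_THRESHOLD

  # One mitigation for coordinated silencing: cap how much any single
  # cohort of accounts that habitually vote together can contribute.
  def is_hidden_capped(cohort_net_votes: dict[str, int], cap: int = 3) -> bool:
      net = sum(max(-cap, min(cap, v)) for v in cohort_net_votes.values())
      return net < HIDE_THRESHOLD

Even with a cap, deciding what counts as a "cohort" is its own moderation problem, which is roughly where bridging-based approaches like Community Notes come in.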


What I heard is that trying to maintain sane content is less profitable than the alternative, and definitely less politically advantageous.

> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.

Strong disagree. This is a very naive understanding of the situation. "Fact-checking" by users is just more of the kind of shouting back and forth that these social networks are already full of. That's why third-party fact checks are important.


I have a complicated history with this viewpoint. I remember back when Wikipedia launched in 2001, I thought: there is no way this will work; it will just end up as a cesspool. Boy, was I wrong. I think I was wrong because Wikipedia has a very well-defined and enforced moderation model, for example a focus on no original research and a neutral point of view.

How can this be replicated with topics that are by definition controversial, and happening in real time? I don't know. But I don't think Meta/X have any sort of vested interest in seeing sober, fact-based conversations. In fact, their incentives work entirely in the opposite direction: the angrier and more divisive the content, the more traffic and engagement it drives [1]. Whereas with Wikipedia, I would argue the opposite is true: Wikipedia would never have gained the dominance it has if it were full of emotionally charged content with dubious or no sourcing.

So I guess my conclusion is that I doubt any community-sourced "fact checking" effort run by the social media platforms themselves will be successful, because the incentives are misaligned for the platform. Why invest any effort into something that will drive down engagement on your platform?

[1] Just one reference I found: https://www.pnas.org/doi/abs/10.1073/pnas.2024292118. From the abstract:

> ... we found that posts about the political out-group were shared or retweeted about twice as often as posts about the in-group. Each individual term referring to the political out-group increased the odds of a social media post being shared by 67%. Out-group language consistently emerged as the strongest predictor of shares and retweets: the average effect size of out-group language was about 4.8 times as strong as that of negative affect language and about 6.7 times as strong as that of moral-emotional language—both established predictors of social media engagement. ...


True, but that doesn't discount that it's a win for Meta.

1) Shouting matches create more ad impressions, as people interact more with the platform; they also get more attention from other viewers than any calm factual statement.

2) Less legal responsibility / costs / overhead.

3) Less potential flak from being officially involved in fact-checking in a way that displeases the current political group in power.

Users lose, but are people who still use FB today going to use FB less because the official fact checkers are gone? Almost certainly not in any significant numbers


Yeah, I agree it's a win for Meta from a $$ perspective, just not for the reason the OP expressed (which was what I was disagreeing with).

OP said it's a win for Meta because it creates more engagement, which is a proxy for $$.

But "fact-checking" by people in authority is OK? Isn't that like, authoritarian?

"Fact-checking" completely removed the ability for debate and is therefore antithetical to a functional democracy. Pushing back against authority, because they are often dead wrong, is foundational to a free society. It's hard to imagine anything more authoritarian than "No I don't have to debate because I'm a fact-checker and by that measure alone you're wrong and I'm right". Very Orwellian indeed!

Additionally, the number of times that I've observed "fact-checkers" lying through their teeth for obvious political reasons is absurd.


> But "fact-checking" by people in authority is OK?

It's by third-party journalism organizations, not Meta employees, so not "people in authority".


They are given the title of fact checker, which ends debate; that is the authoritarian part. It does not matter who employs them. If fact checkers were angels, we wouldn't have this problem. But fact checkers are subject to human nature just like the rest of us: they can be biased, wrong, etc. Do you think these fact checkers don't have their own opinions? Do you think they don't vote? Don't lie?

You are assuming the people on social media are a representative cross-section of society, but you will quickly notice that this is not the case; just look at echo chambers.

If I try to debate the same fact on a far-right and a far-left post, will both undoubtedly come up with the same discussion and conclusion? Let's not lie to ourselves.

So for your claim to have any validity, you would need a fair, unbiased group of people on every post (and that's just the first issue; consider also the loud people versus those who no longer bother to comment because discussion seems impossible). That is de facto not the case, which is why fact-checking is indeed helpful.


Without some sort of controls in place, fact-checking becomes useless because it's subject to being gamed by those with the most time on their hands and/or malicious tools, e.g. bots and sock puppets.

You should look into the implementation, at least the one that X has published. It's not just users shouting back and forth at each other; it's actually a pretty impressive system.

It's more naive to think a fact-checking unit susceptible to government pressure is likely to be better. There will always be government pressure in one form or another to censor content it doesn't like. And we've obviously seen how this worked with the Dems for the last 4 years.

> They aren't explicit about it, but it's clear that pressure does not exist anymore

It's clear that the pressure now comes from the other side of the spectrum. Zuck has already put Trumpists in various key positions.

> I think this is a big win for Meta, because it means people who care about the content being right will have to engage more with the Meta products to ensure their worldview is correctly represented.

It's a good point. They're also going to push more political content, which should increase engagement (eventually frustrating users and advertisers?).

Either way, it's pretty clear that the company works with the power in place, which is extremely concerning (whether you're left or right leaning, and even more if you're not American).


Would it be less concerning if Facebook only worked with one side of politics? How is reducing censorship a bad thing?

Who said anything about that?

> They also said that their existing moderation efforts were due to societal and political pressures. They aren't explicit about it, but it's clear that pressure does not exist anymore.

I didn't think it was any secret that Meta largely complies with US gov't instructions on what to suppress. It's called jawboning[1].

[1] https://www.thefire.org/research-learn/what-jawboning-and-do...


The pressure has just shifted from being applied by the left to the right. There is still censorship on Twitter, it is just the people Elon doesn't like who are getting censored. The same will happen on Facebook. Zuckerberg has been cozying up to Trump for a reason.

FB has been censoring left-wing content and leaving fascists alone for several years now. This is just "like before, but even more," I think.

What is this based on? I see so many people shouting things like this, but there doesn't seem to be any basis for these arguments. They seem a bit useless and empty.

Experience.

Ah ok, nothing noteworthy

Better than the "I made it up" you use, no?

So glad FB abandoned moderation. Both of you guys (left and right) blame Facebook for censorship. What a thankless job. I'd throw my hands up as well.

If you care so much about it, now you can contribute with Community Notes. The power is in your hands! Go forth and be happy.


You're right, censorship is the same as a lack of censorship.

Heh?

> reduce operating expenses

If you assume they are immune to politics (not true but let's go with it), this is the most obvious reason.

They've seen X hasn't taken that much heat for Community Notes and they're like "wow we can cut a line item".

The real problem is, Facebook is not X. 90% of the content on Facebook is not public.

You can't fact-check or community-note the private groups sharing blatantly false content until it spills out via a re-share.

So Facebook will remain a breeding ground of conspiracy, pushed there by the echo chamber and Nazi-bar effects.


How would fact checkers access the 90% of private content? And should they? I don't think so, even if the respective private content is questionable.

The EU goes its own way with trusted flaggers, which is more or less the least sensible option. It won't take long until bounds are overstepped and legal content gets flagged. Perhaps it already happened. This is not a solution to even an ill-defined problem.


Yes. Those are all bad solutions. Banning social networks would be probably better.

Right, if you don't agree with people in an online community, the community should just be banned!

You would be a good dictator.


Good. Private communication is private, even if it's a group. The nice thing about the crazy is that they're incapable of keeping quiet: they will inevitably out themselves.

In the meantime, maybe now I can discuss private matters of my diagnosis without catching random warnings, bans, or worse.


What kind of diagnosis spawns so many fact checks that it's a problem? I'd think any discussion about medical issues would benefit greatly from the calling out of misinformation.

Amusingly enough, it's not misinformation being blocked or called out, it's just straight up censorship of any mention of the topic.

The trouble with fact checkers was quite evident in the Trump-Harris debate.

As a Harris supporter, I actually agree, I think it was way too heavy handed and hurt Harris more than helped. I’m not sure anymore what the goal of fact checking is (I’ve always felt it was somewhat dubious if not done extremely well).

Any fact checker is inevitably going to be biased. For a debate, there should be two fact checkers; each candidate gets to pick one.

That could lead to a debate between the fact checkers, which would derail the debate.

Better to not have fact checkers as part of the debate, and leave the fact checking to the post-debate analysis.


Agreed. I've always felt like most of the fact checking that has come into vogue in the past ten years is designed to comfort the people who already agree, not to inform people who want genuine insight.

If you don’t have fact checkers, a debate loses all its value. Debates must be grounded in fact to have any value at all. Otherwise a “debate” is just a series of campaign stump speeches.

The value in a debate is that the candidates can directly address the opposition's claims.

Theoretically, yes, but when every second sentence is a lie it becomes impossible.

They routinely do just that in campaign stump speeches.

Non-American here (i.e. did not watch the debate), what trouble became evident?

Were they fact-checking too much? Not enough? Incorrectly?


Only one side was fact checked.

Was it the side that did the vast majority of the lying?

Yeah, the problem is that if one side tells 100 lies, and the other tells 1 lie, you can't correct all 100 lies, but if you only correct the most egregious lies then statistically you'll only be correcting the one side, and if you correct 1 lie from each side, then you make it seem like both sides lie equally. The Gish Gallop wins again.

Especially with live fact-checking, the greater the number of lies and the more obvious and blatant those lies are, the more likely someone is to get fact-checked.

We would have to fact check if those numbers are correct.

Oh wait, fact checkers don't work; better to just inform yourself and make up your own mind, and not simply believe some supposedly authoritative figures.


This is the problem: you are clearly biased. She brought up the Charlottesville issue, which has been widely debunked; it is blatantly false and well-known to be false. She was not fact-checked. That's the issue.

Only one side made claims like it being legal to abort babies post-birth.

[flagged]


This is a bit like the movie posters that quote "best movie of the year" when the full quote is "not the best movie of the year".

Go back a sentence.

https://www.reuters.com/article/world/fact-check-virginia-go...

> “where there may be severe deformities. There may be a fetus that’s non viable” he said. “If a mother is in labor, I can tell you exactly what would happen.”

Your dying grandma may go DNR, but that doesn’t mean murdering grandmas is broadly legal.

My wife does charity photography for https://www.nowilaymedowntosleep.org/. You see lots of this sort of withdrawal of care. Calling it an abortion is cruel and dumb.


Yes, this just reads like "oh, thank God for that, that department was an expensive hassle to run".

I don't know if I'd call it a certain win for Meta long term, but it might well be if they play it right. Presumably they're banking on things being fairly siloed anyway, so political tirades in one bubble won't push users in another bubble off the platform. If they have good ways for people to ignore others, maybe they can have their cake and eat it too, unlike Twitter.

Like Twitter, the network effect will retain people, and unlike Twitter, Facebook is a much deeper, more integrated service such that people can't just jump across to a work-alike.

A CEO who can keep his mouth shut is also a pretty big plus for them. They skated away from being involved with a genocide without too many issues, so the same ethical revulsion people have toward Musk seems much less focused on Zuckerberg.


Community Notes is the best thing about Musk's Dumpster fire.

The problem with CN right now, though, is that Musk appears to block it on most of his posts, and/or right-wing moderators downvote the notes so they either never appear or quickly disappear.


I am not so sure that Musk or right-wing moderators are directly to blame for the lack of published community notes. My guess: in recent months, many people (e.g., me) who are motivated to counter fake news have left Twitter for other platforms. Thus, proposed CNs are seen and upvoted by fewer people, resulting in fewer of them being shown to the public. Also, I ask myself: why should I spend time verifying or writing CNs when it does not matter - the emperor knows that he is not wearing any clothes, and he does not care.

> the emperor knows that he is not wearing any clothes, and he does not care.

Indeed the ending of the famous story is:

> "But the Emperor has nothing at all on!" said a little child.

> "Listen to the voice of innocence!" exclaimed his father; and what the child had said was whispered from one to another.

> "But he has nothing at all on!" at last cried out all the people. The Emperor was vexed, for he knew that the people were right; but he thought the procession must go on now! And the lords of the bedchamber took greater pains than ever, to appear holding up a train, although, in reality, there was no train to hold.


Community Notes launched at the start of 2021, predating the buyout by almost two years.

If what they said about their design is to be believed, political downvoting shouldn't heavily impact them. I wish it was easier to see pending notes on a post though.


Right, I think that's the parent's point: CN is a great design, dragged down by the fact that Elon heavily puts his thumb on the scale to make sure posts he likes spread far and wide and posts he dislikes get buried, irrespective of their truth content.

This. You're getting downvoted as badly as me, LOL.

I agree, you should be able to see pending notes even if you're not a CN moderator.

You can see them, it's just that finding the button to do so on a post is difficult. I think you need to navigate to the post from the notes section of the website.

The bad faith “NNN - just expressing an opinion” is a cancer on CNs too.

To be fair, a lot (not all) of notes on Musk's posts are spurious, including the NNN's. It's clearly being misused there, but in general they seem to work very well indeed.

> content moderation is complicated, error-prone, and expensive

I think the fact-checking part is pretty straightforward. What's outrageous is that the content moderators judge content subjectively, labeling perfectly legitimate discussions as misinformation, hate speech, etc. That's where the censorship starts.


How do you avoid judging actual human discussions subjectively? I remember being a forum moderator and struggling with exactly the same issues. No matter what guidelines we set, on one hand there'd be essentially legitimate discussions that superficially were way over the line, and on the other you'd have neo-Nazis acting in ways that weren't technically bad but were clearly leading there.

Facebook moderators have an even harder job than that because the inherent scale of the platform prevents the kinds of personal insights and contextual understanding I had.


My answer is: don't. If something is subjective, then why bother? "Words are violence" is such bullshit.

Okay, but you're saying this on a platform where the moderator (dang) follows intentionally vague and subjective guidelines, presumably because you like the environment more here than some unmoderated howling void elsewhere on the Internet.

The quality of the platform lives or dies on the quality of these decisions. If dang's choices are too bad, this site will die.

The situation is somewhat different between a niche community and a borderline monopoly. But it's also true that facebook's success depends on navigating it well. At the end of the day we can choose to use it or not.

To the extent that people feel forced to use a platform, that's a reason to bias further away from suppressing free expression, even if the result is a somewhat worse platform.


You're still making subjective judgements wherever you draw the line. I don't know how a platform could avoid making subjective judgements at all and still produce an environment people want to be in.

Good point, and thanks. I have to admit I don't have a good answer to this. Maybe what dang needs to assess can be better defined or qualified? Like we can't define porn, but we know it when we see it? On the other hand, assessing whether something is offensive or is hate speech is so subjective that people simply weaponize those labels, intentionally or unintentionally.

> we can't define porn but we know it when we see it?

But we don't, though. Or rather, there's broad consensus over most of it, but there's plenty of disagreement over where exactly the dividing line is.


> That's where the censorship starts.

It also starts when there is no third party anymore. Where is the middle ground?


I do not follow; I do not believe this is correct. Third parties introduce the censorship.

I thought there would be community notes. And how would third-party moderation work? The Stanford doctor was banned from X because he posted peer-reviewed papers challenging the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.

> The Stanford doctor was banned from X because he posted peer-reviewed papers challenging the effectiveness of masks (or vaccines)? I certainly don't want to see that level of hysteria.

Not familiar with that specific case, though generally I'm not a fan of bans. Fact checks are great, though. There have been peer-reviewed papers about midi-chlorians too (https://www.irishnews.com/magazine/science/2017/07/24/news/a...), but I'd sure hope that if someone brought them up in a discussion they'd be fact-checked.




