Facebook overrides fact-checks when climate science is “opinion” (arstechnica.com)
236 points by rbanffy on July 21, 2020 | 245 comments



I remember being very concerned that the fact checkers wouldn't be operating in a vacuum and we'd have a bunch of facebook execs being arbiters of truth.

And that appears to be exactly what has happened. This is terrifying. A handful of mainstream publications and a bunch of basically unknown facebook folks are now the censors. We've already seen well known EFF activists rated untrue for their arguments against warrantless surveillance, and now the censors already can't agree on climate change. I've seen arguments about Confederate generals rated as false because of modern news events - in ways that disagree with well tenured academic historians.

We seem doomed in this regard.


The problem is that there is a well-funded war on facts.

This isn't new either; the legacy version was the systematic denial that smoking caused cancer. This has been going on since the 1950s, and is still being litigated. https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3490543/

The result of this is that any single point of fact-determination becomes a target. It's good of Facebook to take some kind of stand against the most fraudulent and racist articles that get circulated on there; the "blood libel" against Jews and so on. But now they're dragged into the other conflicts. There will be a lot of money available from Exxon-Mobil to deny climate change, for example.

Ultimately the concept of "fact" as something outside subjectivity will be destroyed. The rotation of the earth around the sun is really just your opinion, right? /s If heliocentricity were politically convenient or profitable there would be an astroturf campaign promoting it on facebook.

(A subgenre of this is libel lawsuits, which are much less of a problem in the US due to "anti-SLAPP" laws: https://www.medialaw.org/topics-page/anti-slapp?tmpl=compone... but may be a significant problem in the UK; e.g. https://www.theguardian.com/media/2009/dec/17/bbc-trafigura and https://en.wikipedia.org/wiki/McLibel_case , lasting over a decade)


Ideally, facts could assert themselves. Unfortunately, humans are needed to assert facts, and therefore any judgements based on facts require human interpretation.

We can distill interpretations down as far as we like to leave as little room as possible for bias and dishonesty (and that is a useful effort), but ultimately there will always be POWER in the hands of those to whom we abdicate arbitration authority.

...and power corrupts.

The alternative is creating a diverse ecosystem of organizations/individuals whose analysis/logic people can choose to subscribe to and judge based on the reputation they garner over time.

...and diversity of thought and analysis liberates.


> The alternative is creating a diverse ecosystem of organizations/individuals whose analysis/logic people can choose to subscribe to

Allowing people to "choose" to subscribe to certain things has the ability to get them and others killed. That's why I mentioned the blood libel, while desperately casting around for something that people on HN wouldn't pointlessly dispute as subjective.

The bar for "this is genuinely dangerous" should be very high, though!

(the question of "what is a fact" could easily absorb a large part of an undergraduate History and Philosophy of Science degree, but we can all agree that there are certain things which are provably not facts?)


This is normally why we have limits on behavior like using violence to push viewpoints. You can't reason someone out of something they didn't reason themselves into, but we also know that if you try to suppress them then they'll double down on their viewpoint. The best way to push a cause is to have a martyr.

There is no person fit to play the censor. Power corrupts. What starts as an intent to stop climate-change denialism ends with rating arguments against warrantless surveillance as false. What starts as trying to limit hate speech frequently ends with picking favorites.


OK, so... now what?

You've correctly identified a number of practical problems with combatting misinformation. However, this does not change the fact that misinformation exists and does real harm.

So if you can neither change a person's opinion, nor keep them from influencing others, what would you propose we do?


There are costs to living in a free society. Harm from letting people believe falsehoods is one of them. Soldiers dying to protect you from those seeking to destroy you is another.

If people aren't free to believe false things and even to try to spread those falsehoods to others, then it's no longer a free society. And one day, a thing you know to be true will be deemed false and you might still think that the problem is just that the wrong people have this power.


"some of you may die, but this is a sacrifice I'm willing to make."

Like, are you aware how high those costs can actually get? Who decided that the costs are worth it? Would you say the costs of leading someone into drinking bleach would be worth it?

I absolutely support the right for everyone to have an opinion, but we have to acknowledge that not all opinions are equally valid - otherwise, we're regressing from several centuries of empirical science.

So at the very least, we have to give people the tools to distinguish opinions with a solid scientific basis from unqualified ones or ones given in bad faith. Fact checkers are one tool in this.


>Like, would you say the costs of leading someone into drinking bleach will be worth it?

Not to take sides, but one could argue the costs of the statement you're talking about are insignificant.

The assumption would be that anyone who is that easily influenced to do something that severely harms themselves is subject to many such influences and therefore adding or subtracting a single one doesn't change the probability of disaster noticeably in the long run.

If a straw can break a camel's back, does that mean burning a million straws saves a million camels?

...but if it can save just one...


Teach logic. ...at a young age. Fundamental logic and reason.

That is the first step. Most of the dangerous content online is visibly illogical, and driven purely on emotion.


Yes, and we're all susceptible to emotion to some degree, no matter the age and no matter the education.

There has been extensive research in public relations, advertising, and consumer psychology on how to manipulate people emotionally and make them act against reason.

A bit of training in logical thinking is definitely not enough to counter this.


We're living in the age of emotions. Reason and logic are dead.


They are alive and well. Don't confuse Twitter and social media for the real world.


This is extraordinarily well put.


> the question of "what is a fact" could easily absorb a large part of an undergraduate History and Philosophy of Science degree

Is it really that complicated? It seems like, at least if you subscribe to a certain definition of fact, it becomes straightforward.

Example Fact: I observed that in a sample of X smokers who smoked Y cigarettes a day, Z developed lung cancer.

Example non-Fact: Smoking causes approximately Z rate of lung cancer.

The fact is about an observation that definitely indisputably happened. The non-fact is an extrapolation which is probably close to accurate given sufficiently good studies, but is nevertheless an extrapolation from facts rather than a fact itself. "Gravity is just a theory" and all that.

By this definition of fact, there are actually relatively few facts of general interest. Most of what we're interested in are theories we derive from facts which is where interpretation comes in. There may have been other confounding factors in the study, but as far as the study results are concerned, those were definitely what was measured.
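The gap between the observed fact and the extrapolated rate can be made concrete with a quick sketch. The numbers below are invented for illustration, and a 95% normal-approximation confidence interval stands in for "sufficiently good studies":

```python
import math

def observed_rate(cases: int, sample_size: int) -> float:
    """The 'fact': the proportion actually observed in this sample."""
    return cases / sample_size

def extrapolated_interval(cases: int, sample_size: int, z: float = 1.96):
    """The 'non-fact': a 95% normal-approximation confidence interval
    for the population rate -- an inference, not an observation."""
    p = cases / sample_size
    margin = z * math.sqrt(p * (1 - p) / sample_size)
    return (max(0.0, p - margin), min(1.0, p + margin))

# Hypothetical study: 80 of 1000 smokers in the sample developed lung cancer.
p = observed_rate(80, 1000)               # 0.08 -- indisputable, given the data
lo, hi = extrapolated_interval(80, 1000)  # roughly (0.063, 0.097) -- an estimate
```

The point estimate is what was measured; the interval is where interpretation (and dispute) begins.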

However, my question is genuine. Is this a simplistic view of what a fact is, and if so, why?


And then, assuming you can get people to agree on the accuracy of the observation and on the causation, people can still argue about the best course of action to address a problem. Assuming people don't agree on causation, you might never get enough trials of solutions to learn anything about what a best solution might look like.


Whether or not people agree on something is completely irrelevant to whether or not it is a fact. Earth will orbit the sun, no matter how many people may or may not be comfortable with it.

Finding out what is and what is not a fact can be extremely difficult. However, we've spent the last few centuries developing tools to tackle this problem in the form of the scientific method.


The comment I was replying to pointed out that accurate observations are indisputable facts. To paraphrase the next part, determining causation from observations can be a matter of fact (where you understand the mechanism of causation very well), or of deciding that we're satisfied that the experimentation we've done demonstrates causation (which is not at all the same thing as proving causation). We can also do something like offer a plausible mechanism for causation, which, while often useful, is not particularly close to a fact in the vein of the Earth orbiting the sun.

You're obviously correct that belief doesn't make something a fact, but what to do when a fact hinges on a study? Who decides if the study is good enough? Obviously there is an objective truth, but is the study enough to find it?


> what to do when a fact hinges on a study? Who decides if the study is good enough? Obviously there is an objective truth, but is the study enough to find it?

Good point. This is definitely not trivial or easy - but this is exactly the problem that empirical science has been developing tools for.

Technically, a theory cannot be "proven" - another Einstein could always come along and overturn everything - but there are still theories which stay consistent with all observations - i.e. "facts" - for an extensively long time. The Standard Model of particle physics is such an example: it's not "proven", but for decades every attempt to disprove it has failed, and an enormous amount of working technology has been developed by assuming the model accurately reflects reality.

In a similar way, a single study is probably not enough to make a new theory credible - however if the claims of the theory can be consistently reproduced in additional studies, this gives more justification to rely on it. (And more justification to be sceptical of critics)


> Whether or not people agree on something is completely irrelevant to whether or not it is a fact.

No, it's actually the ENTIRE meaning of "fact". Because "fact" is a word, and it has no meaning if we don't agree.


I'm sorry, I don't get it. Could you elaborate?


«Earth orbits the Sun» is not a very meaningful or scientific statement. «Frame of reference tied to the Sun is closer to an inertial one than the one tied to Earth» is better, but try explaining it to random person on Facebook.


What's wrong with "Earth orbits the sun"?


For nitpicky physicists, all objects in a system orbit each other, so the Earth as well as the sun and other planets orbit a common centre (which happens to be inside the sun).

For more obtuse people, they might nitpick that the system should include the entire milky way (so the earth as well as the sun rotate around the centre of the galaxy).


> Example Fact: I observed that in a sample of X smokers who smoked Y cigarettes a day, Z developed lung cancer.

> Example non-Fact: Smoking causes approximately Z rate of lung cancer.

Apart from the specific problems we have ever proving causality, both these statements are still conclusions based, to varying degrees, on beliefs which may be wrong. The people in the study might have lied. The cancer test might have been flawed. Ultimately, the feeling of knowing a fact is an emotion. In science, all we can do is be as careful as possible in stating how an experiment or study was performed and what happened. People certainly gain in their confidence of these broader statements as the evidence piles up. But from the perspective of science, this reliance on beliefs created by previous studies is a form of bias, which can be a bad thing.


I'm not clear on what you intend as a proposed definition, so to help get us on the right track, I'll take a few stabs at distilling down your post to what I think you might be suggesting. Is one of these close?

1. Facts are propositions that are indisputable.

2. Facts are propositions we are certain are true.

3. Facts are directly observable.

4. Facts are empirically verifiable.


Yes. For one, you don't take into account outright lying about results (which is shockingly common) - also by your definition we can never really agree on anything.


> Allowing people to "choose" to subscribe to certain things has the ability to get them and others killed.

I shouldn't be allowed to choose what I think?


> can all agree that there are certain things which are provably not facts?

In the abstract, sure, presuming your interlocutor shares a common enough epistemic framework, but good rhetoricians can craft slippery arguments.

Instead of "anthropogenic climate change is bollocks and climatologists are all paid-off liars," how about "climate change models have been shown to have errors"? The first statement is itself bollocks, but the latter is true (if also misleading).

It is unlikely but possible the moon is made of cheese. Cheese is delicious on crackers. If astronauts ate moon-cheese, it would be delicious on crackers.


Certainly not every Jew would appreciate it if you argue that people need to be shielded from their opinions for their own safety.

> The bar for "this is genuinely dangerous" should be very high, though!

That is very gracious of you.


Not allowing people to subscribe to facts can also get people killed. How many had to die pointlessly before germ theory was accepted?


So, how would you decide on an actual action in this system?


Consensus.

Which means promoting education and intelligence is critical to having a functioning government.

...and the first rule the consensus should pass is that experts in the field should provide the consensus within their domain area. ie. Climate Scientists should decide whether Climate Change is real (as a matter of policy) - not the general public.


> ...and the first rule the consensus should pass is...

I mean you've just declared your own opinion to be the consensus.

I too can design a system of governance that works via "have a spirited debate in which every opinion is valid, then do what I say" :)


If you step off the edge of a cliff, the fact of gravity will assert its existence on you whether you choose to subscribe to it or not.


But gravity is an explanation of what we know will happen. You can believe you will fall to your death without needing to believe in gravity. Classical mechanics is enough to model the situation accurately enough but also incorrect. We don’t have a unified model either. A lot of our facts in regard to explanations of observed phenomena are much more loose. Rather they are our best explanations. Don’t confuse the map for the terrain.


I agree with your post in general, but consider that accuracy and being correct are hard and take time and consideration. This is something almost nobody values when speaking in normal conversations or the equivalent online venues (Twitter, Facebook, Reddit, HN, etc.). Take your post, for example, with its incorrect use of "heliocentricity": it means an astronomical model in which the Earth and planets orbit the Sun at the center of the Solar System. It has been the accepted theory of planetary motion for centuries and can be independently verified in many different ways :). Not the best example of something incorrect that might be promoted with astroturfing.


> I agree with your post in general but consider that accuracy and being correct is hard and takes time and consideration

This seems like an even stronger argument for Facebook to defer to the independent fact-checking rather than overruling it.


Yup. This is why I actually agree with Zuckerberg on his “no censorship” approach.

He’s smart enough to recognize that taking on the burden of what should be censored and what shouldn’t is a quagmire he’d prefer to avoid. No matter how “good” your filters are, you’re going to piss off a massive group of customers, despite all the effort.

It’s just a massive time sink with little to no benefit.

“The only way to win is to not play at all.”


I think it's fine for Facebook to opt out of fact-checking, as long as they also opt out of algorithmic syndication of content.

If my feed is going to have posts shared by others (even non-friends) or, worse, op-eds in it, then FB is vouching for whatever I see. If they share a link with 50 million people, they're certainly responsible for the consequences.

Worse is the ad side, where Facebook wants to continue to profit from well-funded wars against science while still claiming to be neutral.

If you get paid to broadcast a message and then broadcast it, you are not neutral.


Bingo. You can make an argument that your posts are your free speech. But "What Facebook chooses to put on your feed" is Facebook's speech.


Agree completely.

Of course this is exactly how Facebook want it. To appear as the light touch impartial platform where "everyone is free" to communicate. Except in the background they play with who sees what to increase engagement and run experiments.

With the black-box of algorithms used to target content we can only guess what is really happening. But as every media company knows, controversy sells. So I'll need a lot of proof that generating controversy isn't exactly what Facebook is doing on their platform.


How does this argument apply to Google? When you search something, e.g. "is climate change real", Google ranks the results algorithmically. If Google doesn't do fact checking, are they liable for false statements?


> When you search something e.g. is climate change real, Google ranks the results algorithmically.

On hot-button topics, that's simply not true. Google maintains manual rankings for all sorts of things, and especially manually maintains search suggestions so as to not lead people down the "wrong" path.


It applies just as much, because it is a fundamentally technically illiterate argument. It ignores that the whole point of the algorithm is to prioritize. In this case "just as much" means "none" for both, because it is asking for an outright contradiction: provide prioritization without any bias. Bias without bias. Nonsensical stupidity. The mobster-with-a-cudgel rhetoric of vague threats is just another form of the stupid and wrong meme that "service" and "platform" are mutually exclusive.

I personally try to do my part by stomping on this dumb sentiment like a cockroach whenever it pops up.


I disagree. The "algorithm" is fundamentally an optimization problem and companies get to choose what the fitness function for each algorithm is. Facebook chooses "engagement". This is not an accident nor some fundamental property of math. It's a product decision made by a human. One that can be changed.

And there is a fundamental difference between Google's algorithm and Facebook's: control. Google requires user input through search, Facebook decides what shows up when you open the app/site based on an algorithmic feed. In one case the user has some idea that what they're doing is interacting with an algorithm and they can change their search terms if they're not satisfied with the results, in the other the algorithm is in the background making decisions without making itself obvious.
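That product decision can be shown in a toy sketch (nothing here reflects Facebook's actual system; the post names and engagement scores are invented). The same inputs produce very different feeds depending on which fitness function you sort by:

```python
from datetime import datetime, timedelta

# Hypothetical posts: (id, timestamp, predicted_engagement)
now = datetime(2020, 7, 21)
posts = [
    ("friend_photo",  now - timedelta(hours=1), 0.10),
    ("outrage_share", now - timedelta(hours=9), 0.90),
    ("local_news",    now - timedelta(hours=3), 0.30),
]

# The transparent baseline: newest first, no per-user optimization.
chronological = sorted(posts, key=lambda p: p[1], reverse=True)

# An engagement-maximizing feed: same inputs, different objective.
engagement_ranked = sorted(posts, key=lambda p: p[2], reverse=True)

print([p[0] for p in chronological])      # ['friend_photo', 'local_news', 'outrage_share']
print([p[0] for p in engagement_ranked])  # ['outrage_share', 'local_news', 'friend_photo']
```

The only thing that changed between the two feeds is the sort key - which is exactly the choice a human product team makes.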


There is an implicit search on Facebook though, namely the personal information you’ve entered into your profile and your relationships.


You may be misunderstanding. In this case, I think people conceive of an "obvious" baseline to compare to - presenting posts of your friends in a chronological order.

There's no inherent contradiction when people say "this simple, transparent, obvious algorithm is better". Even if implementing it would in fact result in forgoing billions of dollars.


well said


> I actually agree with Zuckerberg on his “no censorship” approach.

But this is complete nonsense! Facebook cheerfully and aggressively censors anything sexual, far over and above local concerns of legality. Even before FOSTA/SESTA.

(example list from 2018 of censored FB content rules: https://www.cnbc.com/2018/04/24/facebook-content-that-gets-y... )


The grey area and room for debate is much greater for truth than nudity and sexuality.

Zuckerberg's policy preference was no censorship of fake news, not no censorship whatsoever.


> The grey area and room for debate is much greater for truth than nudity and sexuality.

That's your subjective opinion. Different cultures will significantly disagree with you (and with the Zuck) on what the gray area for nudity and sexuality is.


Different cultures may disagree on what the policy should be, but Zuckerberg has the ability to create nudity rules that are objective, whether you agree with them or not.

For example, FB can make a policy that says no exposed nipples for females. There may be different opinions on the value of such a policy, but not the execution/implementation.

This is much more difficult for a policy stating that content must not be "misleading". Fact checking is much more problematic, because there can be support for multiple interpretations, nuance, and scientific uncertainty.


The problem in the linked article doesn't seem to be that the rules were ambiguous. Instead it seems as though they were perfectly clear but overruled by leadership because of external pressure. Plus there's this:

> Last week, however, ProPublica reported that explicit misinformation about voting remains rampant on the platform.

Voting misinformation is not ambiguous in the way you describe, yet Facebook doesn't care.


I think we would have to discuss the examples given by ProPublica [1]. Almost everything I saw was a matter of opinion.

"Your democracy, your freedom is being stripped away from you, and if you allow that then everything this country stood for, fought for, bled for is all in vain"

"the democratic party wants to rig our elections by forcing all voters to vote by mail. Vote by mail WILL lead to fraud"

“Flooding the nation with ballots that can be stolen, sold, discarded, and forged—THAT’s the path to Leftist victory in November.”

“You think coronavirus is a crisis, wait till you see the voter fraud epidemic we have here in California. And mark my words it’s heading to your state like a diseased bat out of hell.”

“If you mail in your vote, your vote will be in Barack Obama’s fireplace.”

[1] https://www.propublica.org/article/outright-lies-voting-misi...


> For example, FB can make a policy that says no exposed nipples for females. There may be different opinions on the value of such a policy, but not the execution/implementation.

Your policy is still ill-defined, though. Do you mean ALL female nipples, or just human ones? Does the ban extend to babies? Who’s going to police that, and how?

Radiolab did an episode on all kinds of Facebook rules, covering the nipple conundrum as part of it. There’s very little that is straightforward about implementing any of them.

https://www.wnycstudios.org/podcasts/radiolab/articles/post-...


I agree there are edge cases. Are you saying the nipple issue is harder than moderating politically contested statements of fact regarding the nature of the world we live in?


I thought you were saying that “universal rules for nudity (subcontent: nipples)” are easier to write than “universal rules for fact-checking.“ If so, I disagree. They are either:

a) equally hard, given the common understanding of what “facts” mean nowadays, or

b) nudity is harder, given cultural contexts.

Culture makes a lot of difference for what’s considered nudity (e.g. women breastfeeding goats, as illustrated in the link), and it should matter less for statements about average temperatures for a given region, etc. But the “facts” as presented are contested anyway, since they are derivations based on data + interpretation, even if the interpretation is from >95% of scientists in the relative field.

That makes me lean more towards (a) even though I wish it was (b).


>I thought you were saying that “universal rules for nudity (subcontent: nipples)” are easier to write than “universal rules for fact-checking.“ If so, I disagree. They are either:

I think we still have a misunderstanding. I am saying that enforceable rules for nipples are easier to write, regardless of the cultural approval of those rules. A well-written rule can be enforced by only looking at the content that is on the platform (e.g. the image). That is not to say people won't disagree (it's Art!) or that reviewers won't get it wrong (it's a male nipple).

When it comes to statements of fact, this is not the case. The enforcers must draw conclusions based on off-platform research, a presumed intent of the post, and/or a presumed interpretation.


I think I’m with you now. The nature of a factual claim is that it refers to other things that must be examined in order to determine if it passes or fails a rule, whereas for many kinds of images there is no external reference that has to be consulted: the content is all right there in the image itself.

In which case I’d have to agree! Thanks for taking the time to clarify it for me.


Exactly! I would say that many other common rules follow the feasibility of images. For example it is easy to test if a post is doxxing, contains swear words, ect.
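A minimal sketch of why such rules are cheap to enforce (the banned-word list is a hypothetical placeholder): the rule is a pure function of the post text, with no external evidence required.

```python
import re

# Hypothetical banned-word list; a real policy list would be far larger.
BANNED = {"heck", "darn"}

def violates_word_rule(post_text: str) -> bool:
    """Decidable from the content alone: no off-platform research needed."""
    words = set(re.findall(r"[a-z']+", post_text.lower()))
    return not BANNED.isdisjoint(words)

print(violates_word_rule("well heck, that's rough"))  # True
print(violates_word_rule("a perfectly polite post"))  # False

# By contrast, a rule like "the post must not be misleading" cannot be
# written as a pure function of the text: evaluating it requires external
# evidence and interpretation.
```
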


I saw someone who was posting close-ups of nipples without specifying a gender to make the point that it isn't really that simple.

Is a trans woman's nipple OK? What about a trans man? What if the trans man has large breasts? What if a cis man has large breasts? What about the tattoo nipple of a cancer survivor?

Nothing is simple, there are value judgements at every turn. The implementation is absolutely making judgements.


I still think it is easier than defining truth.

In all the above cases you could provide the reviewer with an objective yes/no answer. The reviewer does not need a PhD to know whether there is a nipple in a picture.

This is not the case for questions of scientific truth, which often require understanding the scientific consensus, with all its complex nuances.


And of course that doesn't even get into the fact that Facebook isn't about whether you can post something, it's about how algorithms choose to share what you post with others. They can tweak things so your post shows up at the top of everyone's feed, or no one's at all.


not just anything sexual, but mere nudity on social media platforms seems to be controversial. I remember the "female presenting nipples" debacle on Tumblr including pictures of breast-feeding mothers.

"no censorship" apparently means, "don't anger American conservatives", not "everyone can speak and share".


It is probably conventional wisdom that being openly linked to sexual content would kill mainstream adoption. It is deeply stupid, but I can't say it is incorrect.

Now, they have a point with FOSTA - it was already risky when you can't know whether a naked body is 18 or 17.

To be blunt, stupid inputs, be they customer norms or laws, get stupid results - it is how systems work.


But he already plays, by having an algorithm that suggests articles/news/groups to join. It would be one thing if the feed were just a dump of what your friends post and nothing else. But the moment you "suggest" content to people, you take an editorial stance. If you take an editorial stance and start pushing an article to millions of people, then you have a responsibility.

You can't have it both ways. This isn't a phone company.


> It’s just a massive time sink with little to no benefit.

There is arguably a large social benefit to preventing widespread misinformation being disseminated. Perhaps no great business benefit - indeed it is likely a massive expense. But someone somewhere is paying that cost, however it manifests.


I'm sure he'd prefer to avoid it, but Facebook has become influential enough in people's day-to-day information consumption that at this point, doing nothing is itself a stance in favor of the well-funded and well-connected.


> Yup. This is why I actually agree with Zuckerberg on his “no censorship” approach.

The Rohingya massacre in Myanmar begs to differ.

https://www.nytimes.com/2018/10/15/technology/myanmar-facebo...


Facebook can’t have it both ways. They can’t tell Congress they are like the telephone company (an agnostic communication channel) and then turn around and censor whatever they feel like. Either take responsibility for the content or stay out of it.


The whole point of section 230 of the CDA was to say that they can.

Before 230, there had been court rulings that said if a platform moderated user supplied content they were liable for that content. Section 230 was written to reverse those rulings.

I'm not sure why, but a myth has taken hold that under 230 you have to be neutral to get the liability shield.


TIL. Thanks for pointing me to the legislation.


Maybe some people would actually like to repeal that though.



It's not so much a myth as a new proposal.

230 was written for a bulletin board site in '96.

Twitter and other social media behemoths still enjoy this protection.

There have been proposals to hold these behemoths accountable by taking away their liability shield if they do not act neutrally and meet certain criteria (X users, X income, etc.).


"Accountable", there is that weasel word again for "do what we want or else" without even defining it.


That section doesn't allow you to lie to Congress, which is what gp was complaining about.

Facebook can defend itself in the court of public opinion by claiming to be neutral, but when they are in fact discriminating against certain viewpoints they end up facing criticism and potential boycotts for that, regardless of whether it is legal.


This is a common misunderstanding that is often repeated despite being wrong. This is the exact opposite of Section 230, which says that you can have it both ways: You can moderate some but not all stuff without getting in trouble.


But this is exactly what the public seems to be demanding of Facebook - i.e. take responsibility for censoring your content, but only the content I disagree with. FB is in a no-win situation unless they can offload the content moderation to a government entity.


> FB is in a no-win situation unless they can offload the content moderation to a government entity.

Content moderation is something the US government very specifically can't do.


By force, yes - but the US government can publish guidelines, and Facebook can choose to block posts that go against those guidelines, essentially outsourcing its content moderation to the US government while facing only a small amount of the flak ("Facebook does not have an opinion on the wearing of masks; we do not allow posts that go against the scientifically backed recommendations of AGENCY").


Granted, that is exactly how the government lends teeth to sexual harassment policies it would be legally unable to enforce itself.

But I don't think a policy of "nobody can dispute the government's official statements" would fly, even with the enforcement done by Facebook.


Possibly a government sanctioned third party “content moderation board” then? This is partly a cynical take on how FB can offload an impossible task to somebody else.


... And before that, there was local news and newspapers. Some time before that it might be your boss, your parents, or the local clergy.

This is no change, but business as normal. What has changed is that we actually have easy access to information outside of the keyholders. So when the government and some in society dismiss complaints about police brutality, we can have the retribution of actual video (an example with both historical and current examples), and it is so much harder to dismiss. Unfortunately, not everyone is willing to believe facts - but again, I'm not sure this is more true today either, since folks have been rejecting things like modern medicine and science and so on for generations if not centuries. We simply know about it more, thanks to - in part, for good and bad - facebook. And reddit and HN and every social place on the internet.


There's quite a difference between social media interjecting its opinion into your private, personal conversations and whatever the clergy at your local church back in the 90s would have said. They would not have even had visibility into, say, what you might be talking about at the local pub or hanging out with your neighbors. A local newspaper might pitch a story, but editorials were clearly labeled and there were often competing newspapers in most towns.

Statistically, police brutality isn't a problem, and far more people suffer from the effects of too little policing than too much (see the murder rate in Chicago or Juarez).

In this era, we suffer from magnification and amplification of the marginal. A thing that does not occur very often, statistically, can get blown completely out of proportion. The power of social media is the power of propaganda. Propaganda has been used throughout history to start wars, revolutions, religious battles, etc. It has the ability to shape the perception of reality in a way that has heretofore been unthinkable. Facebook alone has an audience and reach that no TV channel or newspaper has ever had. Quantity has a quality all its own.


The assertion that too little policing is the larger problem cannot be used to dismiss police brutality; that idea is disgustingly stupid and wrong. Police brutality isn't "too much" policing poured upon a location like floodwater, but inherent crime and abuse of power, and always a source of problems. Throwing police brutality into Juarez is how you make the situation even worse, and is part of how it got so bad in the first place!

No institutional trust in policing caused organized crime to form long before prohibition was an issue. Police brutality is a seed of crime. It isn't like trolling where attention creates the problem but a tumor where ignoring it makes it grow from manageable to insurmountable.


Are Facebook walls "private, personal conversations?" "Wall" implies they aren't.

Mechanically, they clearly aren't; they're passing through Facebook's servers and FB doesn't pretend to treat them as unfiltered content (in fact, the feed algorithm complaints are much older than the fact-checking complaints).


> A thing that does not occur very often, statistically, can get blown completely out of proportion.

9/11 only happened once, and killed far fewer Americans than American police have since then. Therefore terrorism isn't a problem.

(This argument isn't false, but it's highly misleading!)


I don't want to go down this rabbit hole but a lot of people believe that terrorism isn't a large problem. The outsized response to it has been.

Neither position is an absurd one to take.


Generally speaking, your attitude towards anything depends on your assessment of what will happen in the future if it is unchecked.

If you see a crack in a dam, and a rivulet of water coming through, you base your actions on whether you think it's going to become larger.

If a person is pretty certain that the dam is about to collapse, then talking about the water coming through right now seems palpably absurd, but reverse the assumptions and taking drastic action seems equally absurd.

There's no neutral way based on facts and independent of opinions to decide what's absurd, extreme, or hysterical. But at the same time, different views are different, and some are going to be severely wrong about anything controversial and important.

I don't know how you make a generic framework that tells you the right thing to do when people are talking about invading Iraq, and also tells you the right thing to do when people are talking about shutdowns for covid.


I do think there are at least two material differences today.

One is that you no longer need "physical access" to someone to manipulate them. It is so much easier to get your message to someone today. You don't even need to buy a costly newspaper or TV network. You can be individually targeted from anywhere in the world relatively easily. The scale is completely different.

Second is that attackers are able to target specific people in a specific way thanks to the data collected by us online (I mean, your clergy doesn't have access to your search history). Your local clergy has to put forth a message that both appeals to an entire congregation and also fits in roughly with what is expected to be heard from clergy. The congregation can then talk to each other and decide if the message makes sense. Attackers going after a specific person don't have to deal with either of these.


It is a change, because it's centralized and national in scope.

No single partisan local newspaper editor had the kind of reach that facebook has.


Even in cases of clear video of an event, two people may come to opposing conclusions about what the video actually depicts. An example I saw on HN recently was the video from Seattle of a car hitting and killing two protestors on a highway. One commenter believed the video depicted reckless but ultimately unintentional manslaughter, while another believed it depicted premeditated murder. I think they were both sincere in their opinions, but they both watched the same video and can't both be right.

Another example is the videos of the collapse of the WTC7 building. Many people view those videos and perceive an intact building being intentionally demolished. I view those same videos and perceive a building gutted by debris and fire collapsing under its own weight. We watch the same videos but come to opposite conclusions.


In this context, an apt metaphor would involve one commenter believing that the people were not struck by a car, or that the building still stands...


I don't mean these to be metaphors to some other situation. I mean them to be real examples of the shortcomings of video evidence in determining objective truth. Both are examples that doubtlessly cause heated arguments facebook moderators are expected to moderate.


If that's the case, the problem isn't with video evidence, the problem is people don't know how to make logical arguments. Facebook didn't say they would be moderating discussion in that way.


I don't think it's just a matter of people being illogical (although it's certainly the case that all people will be illogical at least some of the time). How people interpret what they observe is fundamentally subjective.

> "Facebook didn't say they would be moderating discussion in that way."

Sure, I'm responding more to the notion of "Unfortunately, not everyone is willing to believe facts." My point is that video doesn't always make it clear what the facts are. It's not merely a matter of 'some people don't believe facts', because in many cases different people can view the same video and, with a large degree of confidence and in good faith, have different opinions on what the facts are.


If a person uses a video as evidence to conclude something that isn't evident in the video, it is not an issue about subjectivity. It's an issue with the person; they're being illogical, or acting in bad faith.


> something that isn't evident in the video

What is or isn't evident in a video is very often subjective. It's a mistake to believe that somebody who is certain of something which contradicts something you are certain of must be illogical or acting in bad faith. Two people can view the same video with different perspectives[0], perceive two sets of mutually exclusive facts, and from those contradictory perceived facts logically deduce contradictory conclusions.

[0] It's arguably impossible for two people to have the same perspective, since they have lived different lives and had different experiences.


What is or isn't evident in a video is very often subjective.

If this is the case, it's a mistake to try to put forward a fact as objective based on something that is "very often subjective". To do so is illogical or acting in bad faith.


> If this is the case, it's a mistake to try to put forward a fact as objective based on something that is "very often subjective".

Pretty much. If such a thing as truly objective facts exists, I (and any other human) can only perceive and understand it through the imperfect tools available to us. The mechanisms through which we perceive and understand reality are influenced by our lived experiences; nobody is immune to this. Of this, I am certain.

Incidentally, one of the various things I am certain of is that some of the things I am certain of are actually wrong.

> To do so is illogical or acting in bad faith.

I disagree, I don't think that logically follows from the former premise. It may be the case that somebody arguing logically and in good faith simply failed to perceive facts which, had they perceived them, would alter their logical conclusions.

As a concrete example: many people who view the video of WTC7 collapsing seemingly fail to notice that in the upper left corner of the building, sunlight is visible through the windows after the rooftop penthouse collapses, immediately preceding the total collapse of the building. That sunlight is visible through those windows is a fact, by my perception. When I point this out to people who believe WTC7 was demolished, the response is often that they never noticed it before I pointed it out. It doesn't therefore follow that they were previously arguing in bad faith or illogically; what it shows is that different people viewing the same video can acquire different sets of facts. Different sets of facts may lead to different good faith and logical conclusions. A fact that you are unaware of cannot be a part of your logical analysis of matters pertaining to that fact; the best you can do is maintain a general awareness that there may be relevant facts you aren't aware of.


This reminds me of when people said that the DMCA would be used to silence critics, or DRM would be used to prevent people from fixing their own devices. In those other cases, too, the outcome was obvious and predicted, but they went ahead anyway.

We can only hope, pray, and take action[1] to try to prevent the same thing from happening with the government trying to force companies to build backdoors into encryption, a battle we have been fighting for literally decades[2].

[1] https://act.eff.org/action/stop-the-earn-it-bill-before-it-b... [2] https://en.wikipedia.org/wiki/Clipper_chip


Re: "This is terrifying. A handful of mainstream publications and a bunch of basically unknown facebook folks are now the censors."

It's not much different than when the TV news was controlled by 3 networks.


And in fact, there is a huge difference between this era and that era: people who don't like Facebook's tone can use a different site (or run their own) at much lower cost and with fewer governmental barriers to entry than the FCC represents to the television industry.


Broadcast vs targeted.

Very different.


> I've seen arguments about Confederate generals rated as false because of modern news events - in ways that disagree with well tenured academic historians.

Do you have a source for this claim?


Let us be honest here. The people that have the job to sort out this stuff might also not be the brightest stars in the night sky. If I had vested interests where manipulation of social media nets me advantages, I would be very content with the situation.


> The people that have the job to sort out this stuff might also not be the brightest stars in the night sky.

Why not?


Maybe they are... I just thought they might want to do a different job.


In this case, the low-level moderators were apparently overruled by a self-described “conservative” higher-up.


Yes, and I think whoever is responsible has a deeper understanding. People who focus on sanctioning those who don't believe in climate change are a larger hindrance to finding support for climate policies than complete deniers are.

I believe in climate change, but I am very skeptical of the need to censor people for speaking their minds. So if they had free rein, it would have been a net negative for everyone.


I go to the pub with some Facebook "censors". We are not doomed, but Facebook as a platform is.


"I remember being very concerned that the fact checkers wouldn't be operating in a vacuum and we'd have a bunch of facebook execs being arbiters of truth."

This isn't the hurdle - the reality of nuance is the hurdle.

Any one of us would have tremendous difficulty in parsing through the reams of various ways people can comment on the world around them.

It's an impossible problem. Not even a team of people watching your every post would produce consistent results.

"The Earth's temperature increased by 0.1 degrees last year," says John. Well ... there are a handful of different ways of measuring the 'Earth's temperature', so we're already dead in the water.

"The winters are colder where I am, so it looks like 'global cooling' to me," says Judy. Well, if it is getting cooler where this person is, it would look like the Earth is cooling to a person with that narrow frame of reference.

If it was a war of 'scientific voices' with references to specific articles etc. this would be one thing, but we're talking about FB: casual discussion and propagation of 'water cooler banter'.

I don't think anyone has a solution.


> We've already seen well known EFF activists rated untrue for their arguments against warrantless surveillance

Got a link about that?


From my understanding the story is:

* Patriot act is up for renewal

* Amendment proposed to not allow section 215 to be used to search internet history W/O warrant.

* Patriot act reauthorized W/O internet history protection amendment.

* PF rules "[Congress] voted for federal agencies to have access to your internet history without obtaining a warrant." is false because of semantic word games.

https://www.politifact.com/factchecks/2020/may/21/facebook-p...


Politifact seems extremely dishonest in this. Even with these semantic word games "[Congress] voted for federal agencies to have access to your internet history without obtaining a warrant." is still true.

Thank you a lot. Do you happen to have a source for "I've seen arguments about Confederate generals rated as false because of modern news events - in ways that disagree with well tenured academic historians" as well maybe?


Technically, that would be a great case to strengthen anonymity.


I'm curious about the confederate general thing, do you have a link? It's true there is gray area everywhere, I guess I have a hard time believing that Facebook would bother stepping into a genuine aspect of history, in an area that isn't really much disputed.


The Confederacy is hugely disputed. Certainly in relation to its intent, purpose, heroes, and legacy.


Not in a fact-checker-visible sense it isn't, though. We know who the actors were and what they did and how they wrote about it. The US civil war is one of, if not the, best-attested conflicts of its century. What is there to fact check? Like I said, I'm curious and suspect the real issue is a little deeper.


I'm not particularly familiar with the matter, but the actions and writings are still subject to interpretation.

Some examples might be:

What was the reason for the Confederacy to declare independence?

What were General Robert E. Lee's motivations for joining the war?

What was the motivation for most soldiers (who were not slave owners) to join the war?

You can find hundreds of books disputing these topics.


Once again, reasons and motivations aren't the subject of "fact checking" in the sense we're discussing. That's interpretive history, not factual stuff. And I'd be shocked beyond belief if Facebook was actually stepping in here, which is why I asked for a cite.

I'll be blunt: this didn't happen. They never fact checked a post about a confederate general, and if they did it was over some kind of obviously false thing that wouldn't be controversial.

Edit: OK, I found a situation: https://www.politifact.com/factchecks/2020/jun/12/facebook-p...

tl;dr: Facebook put a fact check warning on a piece that claimed in isolation that "Lee opposed both secession and slavery" without mentioning the fact that he led the confederate army in war against the United States or that he personally owned slaves. That seems about right to me.


Thanks for finding the article.

I would argue that this is clearly in the realm of interpretive history.

The core statement "Lee opposed both secession and slavery" is too poorly defined to have an objective answer. It is a question of what a historical figure thought and felt, and it comes with all the nuance of a real human being.

Part of what I object to with the "fact checking" is that it often tries to adjudicate questions that are matters of opinion. Perhaps "opinion" should be the Facebook flag.

Does placing the warning imply that the inverse is true, and "Lee supported both secession and slavery"?


> The core statement "Lee opposed both secession and slavery" is too poorly defined to have an objective answer

... ?

Doesn't the burden of factuality go the OTHER way though? It's an objective-sounding statement of what purports to be a fact. But it's belied by other obvious facts about the man's life. It makes him sound like a union abolitionist and not a slave-holding confederate, which is misleading.

I suppose you might (barely) get away with similar statements. For example "Lee opposed secession prior to the war, and claimed to oppose slavery later in life." That still leaves out context, but at least hints at its existence.

> Does placing the warning imply that the inverse is true and "Lee supported both secession and slavery".

No, it does not. But other statements might be: "Lee owned other human beings" is a clearly factual statement. "Lee fought against the United States" is another.


I think the mere fact that the two of us can read the same political article and then debate the interpretation of "opposition" in this context lends credence to the idea that it is a statement of interpretive history, not fact. Similarly, it is open to the reader's interpretation. You read it as making him sound like a union abolitionist; I did not.

I totally agree that "Lee owned other human beings" and "Lee fought against the United States" are factual statements, and could be fact checked. So could statements that "Prior to the war, Lee wrote that slavery was evil" and "Both Lee and Lincoln would be considered a racist by modern standards, and had very similar opinions on slavery."

I agree that facts can lead readers to misinterpretations. Given how hard it is to verify actual facts, I don't think it should be Facebook's job to protect readers from facts which are accurate but could lead to undesirable world views.

In this instance, one group clearly wants to paint Lee as a moral man trying to navigate the political and economic realities of the antebellum south. Another group clearly wants to paint a picture of Lee as a symbol of southern racism and oppression. These are interpretations of history.


> You read it as making him sound like a union abolitionist, I did not.

Of course not. Because you know the context and have made your own decision to excuse his slave holding and treason when deciding whether he should be celebrated with giant statues. Which is your call to make, I guess.

But the point is that not everyone knows that context. And for those who don't, this advertisement, which is designed to affect public policy, is prima facie misleading because it presents two facts that are (at best) subjects of controversy.

You'd never allow this about other stuff you care about. Imagine a Facebook story with "The constitution gives you no right to own an assault rifle". Equally true, by your standards. Not really acceptable though, right?

> In this instance, one group clearly wants to paint Lee as a moral man trying to navigate the political and economic realities of the antebellum south. Another group clearly wants to paint a picture of Lee as a symbol of southern racism and oppression. These are interpretations of history.

And Facebook wasn't labeling considered interpretations. It was labeling something that looked like a fact. I don't think it's so heavy a burden for those who want to write a defense of Lee not to engage in the kind of spin in that story, is it?


> Of course not. Because you know the context and have made your own decision to excuse his slave holding and treason when deciding whether he should be celebrated with giant statues. Which is your call to make, I guess.

Ignoring the ad hominem, the context I had was the fact that Lee was a Confederate general, plus the article you linked. I approached it with skepticism (e.g., this is a nebulous and unverifiable claim). You approached it assuming an uncharitable interpretation.

>But the point is that not everyone knows that context. And for those who don't, this advertisement, which is designed to affect public policy, is prima facie misleading because it presents two facts that are (at best) subjects of controversy.

This is only prima facie misleading if you subscribe to a specific interpretation and the facts challenge that interpretation. You are starting backwards from a subjective opinion and then labeling facts that do not lead to the same conclusion as problematic. However, you will allow them if they are accompanied by enough additional facts for the reader to come to the same subjective interpretation as you.

My entire point is that if we want to control misinformation, limiting the presentation of "incomplete" fact patterns is a step too far.

>Imagine a Facebook story with "The constitution gives you no right to own an assault rifle". Equally true, by your standards. Not really acceptable though, right?

I have no problem with this. It looks like the title to 100 other garbage opinion pieces.

>And Facebook wasn't labeling considered interpretations. It was labeling something that looked like a fact.

Then it should be more careful to discern between verifiable facts and statements that look like facts.


This whole thing stinks. Why does Facebook need to stick its nose in a viral post about a historical figure? I suppose the possible real-world repercussions of this "fact-checking" are not, like, genocide or anything, but the disposition of various Robert E. Lee statues? Or is Facebook really trying to promote Truth as an end in itself? And they enlist Snopes to help them in this Noble endeavor and declare "mostly false" beyond their standards? Really? And you're defending this shit?


It's not just a "post about a historical figure" though, is it? I mean, if you want to spin a fictionalized yarn about Hammurabi I'm sure they'll allow it.

This was a post about a topical issue of public policy. Not Lee the man of 1865, but whether or not to remove a statue of Lee in 2020. Isn't that exactly the kind of thing that Facebook's fact check policy is designed to regulate?

You aren't allowed to lie (though to be fair this was a lie of omission) to influence a contemporary policy debate, and if you do they'll put a warning on your post that points to a clearer exposition of the truth.


It seems to me that the problem lies not in the fact that our information is unreliable, but that the social contract has broken down. If you want to foster an environment where honesty is viable, you have to start from the very foundations of society.


Facebook said they don't want to be the arbiters of truth, so they're allowing climate change deniers to prevent their posts from being flagged as false, which they are.

In other words, they're being arbiters of truth.


> A handful of mainstream publications and a bunch of basically unknown facebook folks are now the censors.

I share your concern. And, hasn't this been the case for as long as there have been centralized publications?


> This is terrifying.

Ironically, this news is terrifying only if you have a very inaccurate view of human behavior. The posts below about "mindless masses" are so dismissive that I have to question the veracity of the analysis. People who use Facebook are constantly faced with the dissonance of things found on Facebook that do not correlate with reality. Even if FB were your one source of truth (there are no people who "only use Facebook", or FB+Google, etc.), people cannot inherently trust everything they read, given its day-to-day contradictions with their own community realities.


True, but since policies are directed to regulate social media, the question of relevance to the general population is entirely skipped.


Every news source is biased, don't rely on Facebook for news. It seems absurd that anyone would actually suggest otherwise.


Could you elaborate on your point about the Confederate generals?


Not OP but I can imagine an example might look something like this: (Please note below is not my view, but a hypothetical)

Professor of history/analytical view of history may label Robert E. Lee a skilled tactician.

Someone on facebook posts a Robert E Lee post saying he was a skilled tactician.

'Skilled tactician' seems to be a positive stance on Robert E. Lee, running counter to the viewpoint that, as a Confederate general, he was pure evil with no positive qualities, so the story then gets flagged as false despite the historians' assertions that he was a skilled tactician.


That is why I recently said: Fact checking == Truth Factory.

Let's open a ministry for that; guess the name.


So long as there are other sources of information, the risk of one source having a Ministry of Truth is irrelevant.

In a vibrant marketplace of ideas, a "Ministry of Truth" is just "quality standards."


Uh, Ministry for Science?


Facts are a pesky thing, they end up being correct even when you disagree.


Nobody here is concerned about Facebook's ability to morph the fabric of reality -- just their ability to influence other people.


How is Facebook the arbiter of truth when many, many websites other than Facebook exist, and most human beings do not use Facebook?

This is a very different scenario than a single company owning every broadcast tower in a town, pre-internet.

The ability to access Facebook strongly implies general internet access, which is a compelling argument against Facebook exerting any sort of authority over general truth.


69% of people from the US use Facebook. In Europe, it's a little over 80%. Saying that "most humans" don't use it seems pointless at best, disingenuous at worst. If this platform becomes an arbiter of truth, it's clearly a massive problem across many huge portions of the globe.


The statement “most humans don’t use Facebook” is an objectively true statement.

So is “most people who have a Facebook account can also view almost every other site on the whole public internet”.


How many of them even realize that their worldview is potentially being shaped by the moderation teams of the tech they consume? And what alternatives exist to the major platforms (Facebook, reddit, etc)?

They're effectively a captive audience.


How is it that you figured out that you should read sites other than Facebook, and that the (presumably) mindless masses on Facebook have not?

What specifically makes them captive? Why do you believe that? Are the audiences of other extremely popular websites also captive? Why or why not?


>mindless masses

Perhaps you meant this hyperbolically but I do not believe it to be an exaggeration.

>What specifically makes them captive

They are captive because of a combination of their ignorance and the network effect. If all of your friends are on Facebook for example you'll have to leave them behind if you delete your account.


You can't legislate away ignorance.

In fact, if your argument is "most people are ignorant" it's an argument in favor of not letting their primary source of news be dominated by wealth and network effects instead of grounding in at least some objective reality (imperfect though it may be).

People who want "the real truth" have few impediments to seeking it out on the modern Internet. People who don't want to do their own fact-checking are better served by somebody doing it (and then we can keep a close eye on the watchmen).


> most human beings do not use Facebook?

Facebook has literally billions of monthly active users [0], close to two billion daily active users [1] and very likely a number of total accounts that's 10+ billion [2]. Instagram has another billion mau, and half a billion dau [3].

I struggle to think of any other private entity with that much global reach into so many people's lives; the only other example that comes to mind is Google. The two of them combined have direct influence over 70%+ of internet traffic [4].

Staying clear of their impact is active work because even if you don't use their services, chances are many and often even most people around you still do.

[0] https://www.statista.com/statistics/264810/number-of-monthly...

[1] https://www.statista.com/statistics/346167/facebook-global-d...

[2] https://youtu.be/nmXACRrzLMA

[3] https://www.omnicoreagency.com/instagram-statistics/

[4] https://staltz.com/the-web-began-dying-in-2014-heres-how.htm...


Facebook is the 6th most popular website on the planet (behind Google, Youtube, and 3 Chinese sites). That puts them in a position of control over a great deal of the world's information diet. If they say something is false or true, many people will believe it. If they censor something, many people will never see it.


How many dead profiles are on there? I've just deleted mine, and in my circle of "friends" there were about 20 out of 350 that had updated their profile within the last year. Is there some kind of pareto distribution where Facebook has found its 'whales' that it can attract, and silently dropped all the other users, but kept their dead profiles?


Those are Alexa rankings, which count actual page views and time spent, not mere accounts.


How does that put them in control? Users still read at least five other websites.

Their popularity does not give them any ability to censor information widely, unless perhaps they collude with search engines to blacklist certain terms across all popular sites.

The fact that many people use Facebook is an indicator that those same huge groups also have access to every other website, which reduces the persuasiveness of the argument that Facebook gets to be the arbiter of truth, so long as all of those Facebook users have internet access that can access sites that are not Facebook.

If anything, the real arbiters would be the ISPs, who get to decide which sites you can even access, in real-time.


>Users still read at least five other websites.

Three of those are entirely Chinese and can be discounted from this conversation due to government control. That leaves two sites run by the same company, both headquartered in the same general area with a similar culture, and both with ongoing censorship/misinformation controversies.


Ok, what about all of the other websites on the internet?


The fact that other sites exist does not diminish the impact of the most prominent ones.


> How does that put them in control? Users still read at least five other websites.

I can't read Chinese. So discount those sites.


What a disgustingly terrible article. It never explains which precise facts Facebook is treating as opinion; it just throws around political leanings and pointlessly polarizes the issue as "climate scientists" vs "climate change deniers".

This is the kind of journalism that only serves to divide the people even further by empowering their tribalism instead of trying to discuss the matter at hand.


More insidiously, it's another drumbeat for the idea that we need classical media gatekeepers to invade people's personal space and "correct" any wrong-think. That's a very dangerous and unwelcome idea for a free society. The media has no valid purpose in having the power to amend private citizens' conversations; we wouldn't tolerate it at a bar, and we shouldn't in our personal web space.


The greatest trick the devil ever pulled is making you think he doesn't exist.

Even without active censorship, Facebook very much is a gatekeeper invading your personal space. Rather than building walls around you, it merely hands you a shovel and showers you with dopamine bursts every time you dig yourself deeper into your own personal hole. The end effect is the same, though -- all you hear is echoes bouncing off the walls around you. You just think the echoes are only your own.

I agree a free society shouldn't tolerate amending private citizens' conversations. However, that comes with the responsibility of recognizing the more insidious ways of doing it, such as when your attention has been hijacked by industrialized behavioral profiling. The "private citizen conversations" you hear bouncing off your walls are being algorithmically seeded by whatever brings in the most dollars most efficiently. That's not a truly free society; that's just a very subtle form of mass indoctrination.

The devil already exists out there whether you think he does or not. And when you play with the devil, the only winning move is to not play his game.


The really devious move is that the devil does not exist as a literal thing, yet he gets people to carry out the agenda anyway -- not only by serving as a scapegoat for their selfish desires, but through their counterproductive attempts to fight him. If he were real, it would be easier to learn and recognize an actual bad actor and his mechanisms; that can't be done when he is fake. Attempts to fight him are always a wild goose chase, when building something good is the only way that actually works.

"The secret to my success is that I do not really exist."


Completely wrong. You do not have to use Facebook or Twitter at all. They are private websites, and it is you and others who choose to go onto their platform, post your own private information, and allow them to moderate your own private conversations. The problem is that all these people never should have allowed that from day one. I can honestly say that I work professionally in network engineering and I have never used Twitter, Facebook, or even a LinkedIn profile. They are not required; they are information silos, and you've chosen to go into someone else's domain and host your private thoughts there for others to moderate. That's the problem. Create your own hosted website, like the web is made for, and keep your conversation and your own personal control there. Period.


>More insidiously, it's another drumbeat for the idea that we need classical media gate keepers to invade people's personal space and "correct" any wrong-think. Very dangerous and unwelcome idea for a free society.

We desperately need some mechanism for correcting bad thinking based on falsehoods. You can believe whatever you want if your beliefs have no effect on me, but the world has become so complex and interconnected that it's very difficult to point to something and say "your opinion on this has no consequences for other people."

Vaccines? Herd immunity is affected. Not vaccinating your child has little impact on yourself, but if enough people do it, it affects the entire civilization. What should be done about this?

This article is about one of the biggest, climate change.

Every political issue is tarred with the brush of ideology, and that has come at the cost of actual facts and thinking. There is an old saying: "strict adherence to rules is nothing more than an excuse to avoid thinking." The same can be said of ideology.

It is natural that the mediums we have for disseminating information should have responsibility for the integrity of that information.

But I am looking forward to hearing what you would suggest such a mechanism would be.


The best mechanism against bad ideas is to allow trustworthy people who can speak clearly and honestly about those ideas to do so.

The first amendment, in addition to freedom of speech, was supposed to enshrine the idea of a free press so that the two could keep the government in check. The press was supposed to let honest, trustworthy people become known, inform the public about issues of the day, and have their honesty questioned and put on trial. The press chose a different path long ago, favoring one political party to an extreme degree: instead of the public testing the honesty of politicians and pundits, we are simply told, without question and mostly along party lines, what the truth is. Simply adding more power to the press will without a doubt make every problem in America much, much worse in the long run.


First of all, talking about "the press" as though it's some monolithic entity is just wrong. "The Press" is made up of thousands and thousands of uncoordinated organizations and people. There are still honest and trustworthy members of "the press" out there and available.

Just as each individual member of "the press" has a responsibility and duty to serve that role well, we also have a shared responsibility to hold them to that standard.

What we are facing here is not just a failure of "the press", but a failure across society as a whole. We mostly don't want to do the right thing, which means putting aside our egos and focusing on what is actually true. We want to do the thing that feels good, which is getting confirmation of our current beliefs as righteous. "The press" is not responsible for this feeling, but some of them definitely benefit from it.

>allow trustworthy people who can speak clearly and honestly about those ideas to do so

And who, exactly, is going to make these decisions about who is "allowed" to speak or not? What, exactly, is that mechanism?


The question of "who" is exactly the point; the American ideal is that all people get a voice. The more our voices are squashed by a powerful special interest, like the collective media and big tech, the more deranged our society will become. Everyone goes insane when left to themselves, and group dynamics are no different. Whatever group of people you revere as 'good' the most needs to go through trials of its character by opposing voices, or your favorite group just gets more and more deranged over time. This is precisely why freedom of speech is so important: wielding the power of 'who' can talk is far too powerful and far too easily abused.


The problem is that people are believing things that are factually false and that leads them to cause problems for the entire group.

Nothing that you described will fix that problem at all.

How, exactly, are we going to fix that problem?


Allow people to speak. To use your climate change example, climate change experts need to engage the public, even people they disagree with. The fact that they're not creates the appearance that they are dishonest or have alternative agendas. Universities are the worst for this; they no longer allow free speech almost anywhere, and do not engage with anyone who disagrees with them once they've decided they have all the facts. Even if they're right, engagement is the only possible path to a society without oppression by the people who think they have the facts. The longer they avoid engagement, the more deranged they will become, even if they start out completely and factually correct.

Derangement is guaranteed in groups without full engagement of ideas, and the media and universities (at large) are already at alarming levels of insanity in the ideas they're peddling. They stopped engaging with critics long ago, and now they want more power to silence those critics in their own homes. It's the definition of oppression masquerading as decency.


>The fact that they're not creates the appearance they are dishonest or have alternative agendas.

What are you talking about? They are constantly trying to engage.

Your comment about universities doesn't ring true to me either. Universities are where you're most likely to find free exchanges of ideas today. You can cherry pick counter examples, but that ignores the larger truth, and the reality of larger society.

People do not want to hear how they are wrong. People do not want to be wrong. Allowing people to speak does not fix this problem. It does nothing to this problem at all. It is, in fact, the current reality.

You speak of not engaging with critics, but what I've found is that the people who are ignoring basic facts about the world are the ones that are least likely to have honest exchanges about their own ideas.

Still, you have offered no such solution to that problem.


We seem to be at an impasse. You're saying they're constantly trying to engage, but I see something different. Reddit won't even allow "deniers" or anything that smells like a "denier" to speak. The scientific complex will absolutely ostracize anyone who tries to take any funding to research anything that could discredit the mainstream narrative. There is no public debate anymore, as Al Gore famously said, "The debate is over".

Anyway I've given what I believe strongly is the only solution, and ideas like that espoused in this article are dangerous and won't help you or me.


So we have a disagreement about the fundamental facts, and your solution is to stop the conversation.

Thank you for perfectly illustrating the problem I'm talking about.


> We desperately need some mechanism for correcting bad thinking based on falsehoods.

Let's burn the books promoting the bad thinking and then send the offenders off to re-education camps to be corrected.

Update: Edited for clarity.


Are you done editing your comment yet?

None of those options have been particularly successful, have they?

You're editing your comment so much, it's hard to know what to reply to. The bottom line is that I don't know what the answer is, and lately I fear there is no answer.


Here's the Daily Wire article (paywall, so here's a mirror):

https://environmentalprogress.org/big-news/2020/6/29/on-beha...

And here's a reasonable analysis of what it gets wrong: https://sciencefeedback.co/evaluation/article-by-michael-she...

Flagging it as misleading seems fair.

I cannot find the Washington Examiner op-ed in question.


The analysis is, ironically, guilty of the same cherry-picking it accuses the original article of. The checkers identify two false claims and two misleading ones out of Shellenberger's bulleted list of twelve "facts few people know". But the summary never engages with the article's core claim, that climate change won't be the end of the world or of human society even though many in the general public think it will. The first checker explicitly agrees with that claim!

There's also a larger question here. If claims which dispute scientific consensus are fact-checkably false, what does that mean for scientific debate? If Facebook had been created in the 60s, would Freudianism have been forever locked in as an effective form of psychiatry?


If this were a debate for or against a proposition, your criticism would make a lot of sense. Even if there's some false and misleading information used to support the proposition, the proposition can still be adequately supported by other information and arguments.

However, that's not the context of the analysis. The question isn't "Does this guy have a point?" It's "Does this article spread misinformation?" The analysis rightly concludes it does.


I'm not sure I agree. A claim that the article "mixes accurate and inaccurate claims in support of a misleading and overly simplistic argumentation about climate change" strongly suggests that the guy doesn't have a point. Perhaps more importantly, a "partly false" tag on Facebook suggests the same - fact check tags will inevitably be interpreted as a claim that the article's not worth reading.


The "reasonable analysis" is hilarious. E.g.: climate change is affecting wildfires, BUT fires have decreased. Says a Geography professor from Swansea.

This kind of stuff only serves to create more doubt about climate science and, unfortunately among some people, science in general.


Your summary of his comment is misleading because you omit his explanation of how both are true.

> “Fires have declined 25% around the world since 2003″.

> It is correct that the total global area burned has OVERALL declined over the last decades, BUT this is incorrectly used to argue that climate change is not affecting wildfires. The overall decrease is largely due to a substantial reduction in flammable vegetation in African grasslands arising from human land use changes.


I'd hope HN posters would understand that systems aren't that simple and read the full content of that analysis.


Would be nice to know what the actual claims are. I'm not at all a climate change denialist, but some of the things claimed by people trying to raise awareness for climate change are outright false, or at least extremely unlikely, e.g. humans almost certainly aren't going to go extinct because of climate change. But it can still be catastrophic at a society level.

It's a shame the issue is so polarized, because while the claims that the earth is warming and that the warming is at least in part caused by CO2 and other forms of pollution are solid, some of the assumptions going into the models are questionable and can be legitimately debated.


The problem is there are three groups: climate deniers, climate alarmists, and climate realists. The first two are much louder than the third, so the debate quickly turns into an extremist yelling match full of misinformation in the public arena. Most people never settle into the minority middle ground of climate realist, because it can be difficult to muddle through the extremist views to get to the truth. And even if you do, the majority of people will just make it a point to tell you how wrong you are anyway.


This trichotomy is not really helpful.

Here's a more helpful one: people who actually have expertise and peer-reviewed publications, people who at least pay attention to the above, and everyone else.


[flagged]


Thank you for, perhaps unintentionally, proving my point.


> The problem is there are three groups: climate deniers, climate alarmists, and climate realists.

Unless these labels are ones that people apply to themselves, you're being pretty unhelpful by labeling people.


Sounds like this is about Michael Shellenberger, the long-time climate activist recently turned climate realist.

https://www.dailywire.com/news/shellenberger-on-behalf-of-en...


Here's a response for diversity of viewpoints: https://sciencefeedback.co/evaluation/article-by-michael-she...


Do you have a mirror? It's behind a paywall


http://archive.is/wLPcG

Allegedly a postscript was added to the first article referring to the facebook fact checking, so I imagine this is actually the original article and not the updated one.


This problem was created by social media choosing which articles to show in your news feed based on engagement: that is, on how much you'll emotionally react to them.

The solution is not to add another layer of "fact checking". That's super dubious.

Get rid of what got us here.

News feeds organized by engagement are evil. Dismantling them would solve both the production and consumption of emotion driven information.
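The ranking difference this comment objects to can be made concrete with a toy sketch. All field names and weights here are made up for illustration; real feed ranking uses far more complex, personalized models, but the structural point (emotionally provocative posts outrank recent ones) survives:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    timestamp: int  # seconds since epoch
    clicks: int
    comments: int
    shares: int

def engagement_score(p: Post) -> float:
    # Hypothetical weights: signals of strong emotional reaction
    # (comments, shares) count far more than passive clicks.
    return 1.0 * p.clicks + 4.0 * p.comments + 8.0 * p.shares

def rank_by_engagement(posts):
    # The feed design being criticized: most provocative first.
    return sorted(posts, key=engagement_score, reverse=True)

def rank_chronologically(posts):
    # The implied alternative: newest first, no editorializing.
    return sorted(posts, key=lambda p: p.timestamp, reverse=True)

posts = [
    Post("a", 100, clicks=50, comments=1, shares=0),
    Post("b", 200, clicks=5, comments=30, shares=20),
    Post("c", 300, clicks=10, comments=0, shares=0),
]
print([p.author for p in rank_by_engagement(posts)])    # ['b', 'a', 'c']
print([p.author for p in rank_chronologically(posts)])  # ['c', 'b', 'a']
```

The sketch shows the crux: once comments and shares are weighted heavily, the post that provokes the most reaction ("b") wins regardless of recency or accuracy, which is exactly the production-of-emotion loop the comment describes.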


Traditional news media is also organized by engagement: an entire department is responsible for picking the home page of a newspaper and importance of each story.

The major publications do a better job at not sensationalizing everything, but this has been a problem since 1890 at least: https://en.wikipedia.org/wiki/Yellow_journalism

That social media naturally tends to become a tabloid magazine may be more a reflection on readers than the editor (whether human or machine).


I agree. That's a problem, too.

The reason it became so pronounced is that social media got very good at it and then taught the lesson to traditional media. They, too, now use metrics of engagement to choose what they show.

It is true also that it is a weakness of people. But there are other weaknesses that we regulate in the interest of a healthy society.

I'm not a fan of regulation but a general awareness of the core issue would go a long way towards a solution.

Right now, the talk about the issue seems misdirected. I can only see the proposed solutions putting immense power of controlling narrative in a few hands.


Facebook warned me yesterday, when I shared a satire article, that the article was over a year old and the facts may no longer be current.

Thank you for protecting me Facebook gosh darnit!


Heh, satire can get stale if overtaken by reality.


I don't understand why it's Facebook's role to teach people. Maybe if regulations were instead imposed on school curriculums we wouldn't have dumb adults arguing about climate change, flat earth or mask wearing. Combat issues at the root not the fruit.


Facebook endumbens people. It's there, front and center of the stupidification process, whether it wants to be or not. And it wants to be: it's aware that the mechanisms it uses to attract eyeballs (and thus revenue) favor lies and propaganda over facts. Facts are boring. Proof that your ideological enemies are out to get you is endlessly fascinating.

They want to have it both ways, to remain sticky but without having the world become a worse place for it. I'm sure they'd love it if schools didn't also contribute to the increasing gullibility of the US, but that's not something they can control. Their own feeds, by contrast, are something they have power over.


> Facebook endumbens people. It's there, front and center of the stupidification process, whether it wants to be or not.

Facebook isn't making anyone dumb. They were already dumb; you just didn't notice before.


For more evidence, consider the magazines at grocery store checkout counters. By popularity, that stuff is more representative of what people like than, say, pg's essays.

That Snapchat's (curated!) discover tab looks like that suggests to me that this content really is just popular with people and even human editors recognize that.


You're correct that they were already dumb, but it does actively remove knowledge from their heads and replace it with harder-to-budge falsehoods. That sclerosis of the neurons makes them act like people even dumber than their natural state of stupefaction.


With Facebook there is at least a chance, however minuscule, that you are exposed to different ideas. With media like Fox News, it’s straight out propaganda. This is what’s causing the polarization; the conspiracy theories and extremism have to come from somewhere. Why go after the symptom and not the cause?


The cause is all mixed up in cause and effect. Fox News exists because there are people who want it. It amplifies what they believe and gradually pushes them to believe more extreme forms of it, but it's just one part of a huge ecosystem -- politicians, right-wing radio, religious leaders, etc. The common factor all of those have is the population.

Facebook doesn't have control of any of those things. The only thing it can control is itself. But it can at least reduce its own part in the way the people and the media perform for each other to be even more extreme. And Facebook is in a rare (if not unique) position to be directly in front of that population: it can't make Fox News go away, but it can put a flag in front of Fox News stories.

The "cause" is ultimately the individuals, not the media or the politicians or the wealthy think-tank funders. They don't have mind control. All they can do is appeal to people with what they already want to believe and amplify it -- a process that's been going on for at least four decades. Reversing it will take at least as long. I don't expect FB to solve it, but I do think it's a good idea for them to not allow themselves to be part of it.


It isn't! When Facebook announced the program [1], they made it clear that they had no interest in becoming arbiters of truth; fact checking was about preventing hoaxes, stories like "5G causes coronavirus" that are just completely and unambiguously made up. The expectation that Facebook fact checkers should intervene in live political controversies was inevitable IMO, but I don't think many people would explicitly endorse the idea that Facebook has to teach people good climate science.

[1] https://www.theverge.com/2016/12/15/13960062/facebook-fact-c...


Facebook moderation and education reform are not mutually exclusive.


According to the piece, the fact checkers claimed an opinion piece was false because all of the facts in it were cherry picked:

"The researchers found that the post by the CO2 Coalition was based on cherry-picked information to mislead readers into thinking climate science models are wrong about global warming."

This is not a logical argument. Facts are correct or incorrect, whether they're cherry-picked or not.


Cherry-picking truths and listing them out of context (i.e., wildfire reduction without mentioning land-usage changes) may not be fake news, but it definitely smells like propaganda.

I'm not defending Facebook's actions, which are wrong for a different set of motives detached from the actual article quality, but this isn't the censorship smoking gun people want it to be.


They claimed it was misleading, no?


Question: Facebook as a platform is not liable for the information it contains. But if Facebook publishes fact-checking info that is wrong or misleading, are they liable?

Could I sue Facebook if their "facts" were proven wrong and caused me damage?


> Could I sue facebook if their "facts" were proven wrong, and cause me damage?

Per the landmark ruling NYTimes v Sullivan — no, you'd probably lose that case.


This case seems to apply to public officials, I assume it would be different for private citizens?


The case was specifically about a full page ad that the New York Times had taken out that contained a number of factual inaccuracies, such as the number of times King had been arrested during the protests, what song the protesters had sung, and whether or not students had been expelled for participating.

GP asked:

> Facebook as a platform is not liable for the information it contains. But if Facebook publishes fact-checking info that is wrong or misleading, are they liable?

> Could I sue Facebook if their "facts" were proven wrong and caused me damage?

NYTimes v Sullivan held that unless the "facts" proven wrong are specifically about a private individual, you can't sue for damages when a newspaper publishes information that is factually incorrect. If the NYTimes Editorial Board published an article claiming that the sky is green, you can't sue them for that. More practically, if the NYTimes Editorial Board published an article saying "climate change isn't real", you can't sue them for that either.

Likewise, if Facebook published fact checking info that's proven wrong or misleading, they won't be held liable unless the fact checking info directly defames a private individual.

Also note that this is US-centric. Outside of the US, this kind of publisher speech protection doesn't exist, and such nations can cause legal problems for Facebook. Either FB would have to hide the contents from citizens of those nations, or stop operating there altogether.


They'll probably punt the responsibility to the third-party 'fact checkers', even though they selected the fact checkers themselves.


If they've gone beyond publishing user content and are making their own, I don't see why not.


You can sue anyone for everything. Do you stand a chance? No.

The platform/publisher dichotomy is mostly just a right-wing talking point with absolutely no basis in law. As your example alludes to, a website can well be both at the same time.

For text that is Facebook’s own editorial content, the liability standard generally applied to any speaker (historically tested in relation to news media and book publishers) would apply, which is one of “malicious intent”: you need to prove that they wanted to cause the harm you suffered.

This is a rather high standard, but it follows from the US’s regard for free speech. You are likely to encounter additional difficulties trying to prove and quantify the specific damages you incurred.

But Facebook’s fact checking is set up somewhat differently, with third parties doing these checks. It seems almost impossible to make a case for intent against FB for a correction authored by any of these organizations.


Facebook is an international company and should comply with local laws, if it wants to do local advertising.

My country bans propagation of certain ideologies, due to our history. For example, Heineken cannot use its logo here; it would be a criminal offense, not a civil lawsuit.

Another possible problem is anti-free-Hong-Kong propaganda and the latest presidential decree.

I don't really care about politics; I'm just showing that this is incredibly thin ice.


The red star, specifically?

Eastern Europe?


I’m not really sure what people expected from Facebook here.

The oldest of newspapers struggle with these kinds of judgements (until they just decide to stop caring and cater to a certain political side).

It is not reasonable to expect Facebook to get it right with any consistency, if there even is a definitive correct answer to begin with.


Consistency is the word here.

Facebook has a policy that they put in place that's fairly reasonable, they have outside fact-checkers they acknowledge as experts. They deliberately set aside the mechanism they put in place to avoid this kind of nonsense.

It is entirely reasonable to expect Facebook to apply the policy that they themselves created in order to avoid having to make these kinds of calls.


Or we could get rid of democracy and go back to kingship/dictatorship. /s

But on a serious note: a functioning democracy seems to rest on the assumption that the majority is educated and knows what is good for them. Information warfare like this spells disaster for democracy.


Stop using someone else's domain and server space and acting like that is where you deserve unhindered free speech. I'm so tired of hearing the bitching about Facebook or Twitter on HN lately. The future of the web will be decentralized. It never made sense to use your real identity and then go post totally private conversations, unencrypted, on someone else's machines! Create your personally hosted cheap blog and post your thoughts there, where YOU control them. I think Facebook should start charging like $50/month to use it, so it would wake everyone up to the simple fact that it isn't your service. It isn't your server. You are being moderated in your "private" conversations because you choose to be. We should have millions of personal websites and blogs solely controlled by the individual, but instead millions of people flock to private web servers, for free, and act like they own them! Absurd.


I think the problem with Facebook and similar platforms is that they tend to merge everything into one stream. You see news, opinion, humor, satire, and ads all in one continuous feed, often not labeled as to which type it is (except ads usually say "sponsored" or something like that).

Furthermore, they are designed to get you to consume as much of this stream as possible. They would rather have you shallowly engage with 120 articles in an hour than deeply delve into 2 articles.

(In fact, their stupid system actually penalizes you for going too deeply into an article, but I'll save the rant that would follow if I went into detail on that for another time...)

This results in their readers often failing at telling news, opinion, humor, etc., apart.

They should probably separate stuff out, like newspapers and magazines usually do. Then fact check everything in news, and perhaps have a disclaimer in the opinion section clearly stating that the articles are the opinions of their authors and have specifically not been fact checked.


This was Neil Postman's critique of (pre-Facebook, pre-WWW) media in Amusing Ourselves to Death, originally published in 1985.

https://www.worldcat.org/title/amusing-ourselves-to-death-pu...

The underlying criticism of news media is rather older.


>Furthermore, they are designed to get you to consume as much of this stream as possible. They would rather have you shallowly engage 120 article in an hour than deeply delve into 2 articles.

Is this actually true? This sounds to me like a 2010s understanding of advertiser supported content, where the important thing was getting page impressions. The result back then was clickbait, because they were just trying to drive views. But basically most of those business models have died off. Advertisers are far more interested in accessing a strongly identified demographic and are willing to pay a premium to get their adverts in front of people. Which would sort of map to Facebook wanting you to deeply engage with specific topics, rather than to simply skim lots of pages.


The bottom line is that much of climate science is nothing more than conjecture, and you can get as mad about it as you like, and call it "misinformation" all you like. There is no evidence. Several million years ago, we can tell that CO2 levels were over 1,000 ppm and could climb higher than that. Today, we are way lower in comparison, and we face more pollution than anything. I often find that people like to argue and conflate pollution with global warming. We have evidence of pollution. We don't have evidence that the poles shifting and the climate shifting are actually man-made, or that they WILL factually be detrimental. The planet is still here and spinning, although it has seen atmospheric CO2 levels over 1,200 ppm in past eras, prior to industrialization. So much drama is involved in these tightly held opinions based in conjecture.


Facebook is arbitraging manpower here.

Firstly, this article is about American issues, so it's only the American "lens" being used to focus on issues. These same and similar issues must exist in every other country, many of which have zero ability to fund individuals who can dedicate their time and lives to fact-checking their local power structures or narratives.

Secondly, even the constrained solution/issue space is too deep for Facebook to effectively police, leaving aside the philosophical arguments.

Removing statements which say “all minority people need to die” is easy.

Removing opinion pieces? Suppose Kanye says he doesn’t believe in vaccines and that is his opinion. Remove? Keep?

Is it noteworthy because a huge personality said it? What about people who are B-list celebrities? Or C-list?

What about an article which says, “We find that the science on climate change makes claims that are not supported entirely by the data; here is why, and here is our research”?

Now you need specialists, or access to a network of diagnosticians, who can tell you whether complex pieces of content are Trojans or not.

Facebook and social media enterprises are paying the price of the hollowing out of editorial boards. All those local newspapers and newsrooms where stuff gets filtered - that is likely the amount of (skilled) manpower Facebook needs to employ to handle this.

This is where the social media model totally collapses - you have issues with climate denial, fake medicine, voter suppression, etc., and subdivisions of each of these problems.

There’s never a shortage of problems and a fractal level of complexity.

How do you ever satisfy the verification needs at this point?

Therefore social networks will always have massive numbers of bad actors, subterranean cults, hidden mind viruses infecting armies of people, propaganda and legions of victims.

This isn’t an idle question - the problems here are about the nature of how humans structure their networks, verify information, and build trust.

Facebook or any social network will always fail at this, unless they see this as what it really is - time to make editorial decisions on content on the network at scale.

The Silicon Valley heyday ideal of free speech is dead. We’re now in a rear-guard action to ring-fence people from infectious and malicious ideas.

If you think this is a step too far, please remember that America has people actively fighting against wearing face masks during the greatest modern pandemic.


> Removing an article which says ...

This is no different from removing an article which says you shouldn't use vaccines because they are ineffective and cause Autism.

When the facts used to back an opinion are false, then the factual portions of that opinion piece are subject to fact-checking. Otherwise, there is zero point to having fact checkers or any sort of moderation.


When Facebook hired the Daily Wire to be part of its "truth," everyone knew it was going to be trash.


maybe this is because Zuck flies between Hawaii and SF about 2 times every month? (just guessing based on his freckle count)


[flagged]


Ars has become a bigger echo chamber than most sub-Reddits.


If they didn't allow opinions, then the platform is no longer a discussion platform.


It was never a question of whether they allow opinions or not, but always of what you are allowed to publish on their platform.

And as in any forum, private or public, there are restrictions in place. The question is only which ones. If no restrictions were in place, then it wouldn't be a discussion platform either.


The question is not whether or not opinions are allowed. No content is removed by this policy. The question is whether or not a fact check correction/disclaimer should be added to incorrect articles, and whether or not being an "opinion" piece acts as a way to evade the rule.


Facebook has devolved into the Karen social network.


According to the Ars piece, the outsourced fact checkers claimed an opinion piece was false because all of the facts in it were cherry picked:

"The researchers found that the post by the CO2 Coalition was based on cherry-picked information to mislead readers into thinking climate science models are wrong about global warming."

This is not a logical argument. Facts are correct or incorrect, whether they're cherry-picked or not.


But we don't want to be the arbiters of truth!

Has this company ever been honest about anything in its entire history? Do any of the execs have mirrors in their houses so they have to look at who they are every now and then?


First sentence of the second paragraph:

> Facebook does not employ fact-checkers directly but rather works with a range of third-party organizations to rate how true or false content shared in categories is.

The checker in the particular incident in question was Climate Feedback. The article doesn't have a link to the original Daily Wire article.


Consider the outrageous claim that Tucker Carlson made today (that NYT was "planning" to dox him). Was it true? Did he just cleverly "pre-empt" it? Or did he intentionally create a story to teach a lesson to the NYT reporters?

Things are made much murkier when you then find out that, yes, indeed, he was actually doxxed once before. And people did actually show up to his house to protest. And even more amazingly, the first-person account of this "witness" actually makes you wonder whether Tucker has a point. Why did they go to his house at night? And how can anyone, from inside a house, distinguish "firm knocking" from trying to break down the door? Will you be calm enough to just say "Oh... I thought he was going to break it. But it was just a firm knock, now that I think about it. Phew..."? And why in fuck's name would you actually spray-paint anything on Tucker's property? He is going to refer to it as vandalism, because if someone came to your house and peed on the front door, that's exactly how you would refer to it too. Or are you instead going to say "Well, it's just some organic material, it can always be wiped off"?

https://archive.thinkprogress.org/i-was-at-the-protest-outsi...

At this point, there isn't actually any reason to believe one side over the other anymore. It is just complete hysterics on both sides.


Any aspiring authoritarian knows that once the public can be convinced everyone is hysterical and biased, it's that much easier to get the average Joe to throw up his hands and say "whatever, I'll just support the strong man who will help me get mine, since other politicians are just going to get theirs, and I certainly can't trust the media." Tucker invented a story, regardless of his personal history, likely to deflect from the fact that he was recently named in a civil suit that details sexual assault by a Fox employee. Whatever your background, making things up for TV is not defensible. EDIT: and if his personal history induces him to invent narratives, it's clear he doesn't belong in his current profession


At least with corporate TV news, we have to remember there is no "one side over the other". It's mostly one side if you cluster them by their economic goals.

These are business entities that can quickly create short segments (based on current events) designed to stir up emotion and interest among their targeted market segments, keeping eyeballs coming back and engaged while the TV ads play.

FOX has its segment; MSNBC learned from them and has theirs; CNN's is a bit less defined, and as a result they are suffering compared to FOX and MSNBC.

These entities are run by groups of executives focused on generating revenue. I don't believe MSNBC's executives are ideologically any different from FOX's executives.


"At this point, there isn't actually any reason to believe one side over the other anymore. It is just complete hysterics on both sides."

Welcome to the skeptical moderates' table. We actively seek falsification of political assertions, particularly if an assertion fits neatly into an overall narrative, and particularly if we would like it to be true. We are a small and beleaguered group.


> Consider the outrageous claim

Is it really outrageous when the NYT has previously threatened to dox people? Most recently the author of Slate Star Codex, which received a lot of attention on HN.

https://slatestarcodex.com/2020/06/22/nyt-is-threatening-my-...


Yes, because you can't dox known public figures by definition. Donald Trump resides at 1600 Pennsylvania Avenue. Look, I just doxxed the president!

To knowingly claim something impossible by definition is outrageous.


Just because Tucker is a public figure doesn't mean his private residence needs to be publicized by the NYT. Regardless, the rest of his family who live with him aren't public figures.

Willfully conflating the White House and Tucker's house is disingenuous.



