We just want to see how and what is defined as “false info”.
As anyone who’s read Snopes or FactCheck.org knows, while the fact checks often point out genuinely incorrect information, some of the time they reframe the question or narrative, assume an inaccurate scope, or apply semantics in a convenient way, labeling something that isn’t necessarily false as false in a way that reflects the bias of the author.
It leads to a basic mistrust of what counts as “false info” and what doesn’t, so some people have a hard time trusting these types of studies and the basic premise that platforms should ban “false info”.
I feel Twitter’s community notes function is actually pretty good at combatting false info without creating a centralized source with the above issues.
> I feel Twitter’s community notes function is actually pretty good at combatting false info without creating a centralized source with the above issues.
I agree, it's pretty good. Except when it gets applied to one of Elon's posts and then mysteriously disappears.
BirdWatch ("Community Notes") is a simple reproduction of Reddit's top comments, on Twitter. And like u/spez, Elon just can't seem to resist hopping into the database and monkeying with things.
But thinking about it more, we are not saying that all information has to be true or that someone can't be wrong.
I think it is also easier to identify obviously false information than it is to identify true information, just due to the nature of discovering things. It is easier to rule something out than to confirm it. That also helps combat the "censorship" claims.
So it seems like it would be about seeing a pattern of false information. Maybe they do legitimately believe what they are saying because they fell down some rabbit hole. But if a majority of subject matter experts would say something is false, and that happens over and over again for someone, then they are likely not contributing much to the platform.
> As anyone who’s read Snopes or FactCheck.org knows, while the fact checks often point out genuinely incorrect information, some of the time they reframe the question or narrative, assume an inaccurate scope, or apply semantics in a convenient way, labeling something that isn’t necessarily false as false in a way that reflects the bias of the author.
Do you have an example of this, on Snopes or another fact checking website?
I would say that the fact checks of the Wuhan lab leak theory during the pandemic were a pretty good example of this: just because something might likely be of "natural origin" doesn't mean that it didn't escape from a lab.
So they would criticize someone who said it leaked out of a lab with evidence that suggested that the virus wasn't man made. However, that evidence doesn't really have much to do with:
1. Could a virus of natural origin have escaped from a virus lab unmodified?
2. Could a lab that was engaged in research trying to modify a virus have accidentally let the virus escape and the research was the actual source of the virus?
To be fair, I personally don't think anyone can say for sure what really happened, but by narrowing the scope of the question to "a man-made virus escaped from a lab" when the original speaker just said it leaked from a lab, the claim can easily be labeled as false.
allsides.com has done reviews of the fact checking websites. The last one for Snopes was in 2021, it says:
"We reviewed the numerous instances of Snopes' left-wing bias that we found during our June 2020 Editorial Review, such as slant. We also noted a number of times that Snopes had recently interpreted things in favor of the left, including when it "fact checked" a subjective opinion on Alexandria Ocasio-Cortez (D-NY), when it defended Gov. Andrew Cuomo by saying an accurate tweet about him was "Mostly False," and when it "fact-checked" satire from humor website The Babylon Bee (an entry Snopes then had to edit following criticism)."
So a few things... one, I don't think you can say a fact checker is slanted one way or another by finding a few examples of bias; how do we know there isn't an equal or greater number of instances that slant the other way? You would need a systematic analysis of all the articles with some rubric for grading.
Second, I take issue with the examples they did use. They say the tweet was actually true, but it doesn't seem that way to me. The tweet argued that Cuomo said that the vaccine being developed under Trump was bad, but what he actually said was that he thought the rollout plan would be bad. Very different.
"The panel looks for common types of media bias such as slant, spin, sensationalism, and story choice. We review the outlet’s homepage, headlines, recent articles, photos, and other content dating as far back as six months using online archival tools like the Wayback Machine. Taking into account all perspectives, panelists individually assign a number, between -6.0 and +6.0, that they believe best represents the bias of the media outlet. These numerical ratings are then transcribed into a weighted average."
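The scoring procedure described above is essentially a weighted average of panelist ratings on a -6.0 to +6.0 scale. A minimal sketch of that arithmetic (the panelist scores and weights below are invented for illustration; AllSides does not publish its actual weighting):

```python
# Hypothetical illustration of the panel-scoring scheme described above:
# each panelist assigns a bias rating in [-6.0, +6.0], and the ratings are
# combined as a weighted average. All numbers here are made up.

def weighted_bias(scores, weights):
    """Combine panelist bias ratings into a single weighted average."""
    if not scores or len(scores) != len(weights):
        raise ValueError("need exactly one weight per score")
    return sum(s * w for s, w in zip(scores, weights)) / sum(weights)

# Three hypothetical panelists reviewing the same outlet.
scores = [-2.5, -1.0, 0.5]
weights = [1.0, 1.0, 1.0]  # equal weights reduce to a plain mean
print(weighted_bias(scores, weights))  # -1.0 -> a "leans left" rating
```

With equal weights this is just the mean; the point of weighting would be to let some panelists (or some review rounds) count more than others.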
>> The tweet argued that Cuomo said that the vaccine being developed under Trump was bad, but the thing he actually said was that he thought the roll out plan would be bad.
The statement that Snopes fact checked was "Did Gov. Cuomo Say It Was 'Bad News' Pfizer Vaccine Progress Came Under Trump?".
Cuomo said "The good news is the Pfizer tests look good, and we’ll have a vaccine shortly. The bad news is that it’s about two months before Joe Biden takes over, and that means this administration is going to be implementing a vaccine plan."
If someone is going to call the original statement "Mostly False", then yes, I would consider them to have a left bias. Cuomo is saying that it is good that we had vaccine progress, but bad that it happened before Biden was President. That sounds like an awful thing to say, and from my perspective, Snopes is covering for Cuomo who said something he would probably in retrospect like to have said in a different way.
Leave it up to the courts to decide. It seems to be the only place that the truth matters now.
If you cannot reasonably prove you acted in good faith - or cannot prove you did not knowingly make false statements (and, now that they are proved false, retract and remove them) - then you are a problem.
What that looks like in actual practice I don’t know, but burying our heads in the sand and saying “it’s too difficult to stop” is not a good position to have.
Even if "banning false info online can improve discourse" that wouldn't make the required censorship OK. Either we're an Enlightenment society, or we're not. Which is it?
Community notes seems to get stalemated as soon as any demagogue starts saying things that aren't true. The demagogue's followers make it so the community note never appears. (Just look at all the false stuff Elon Musk has tweeted.)
Our entire current system of spreading information seems to allow liars who get popular on basically false outrage bait to have a huge megaphone. I'm not sure how to stop this without a centralized authority shutting it down.
> Our entire current system of spreading information seems to allow liars who get popular on basically false outrage bait to have a huge megaphone. I'm not sure how to stop this without a centralized authority shutting it down.
Centralized authorities are immune from demagoguery? You would throw out the first amendment to protect us from liars getting megaphones?
That's basically the whole game in politics right now. Spread lies to your base, then when called on it, turn around and say "Well, 50 million people believe this is true. We need to respect their opinion."
>The demagogue's followers make it so the community note never appears
>Just look at all the false stuff elon musk has tweeted.
>allow liars who get popular on basically false outrage bait to have a huge megaphone
Do you have any kind of evidence or proof, for any of these claims? It sounds conspiratorial and emotional, especially when you say "it seems" about something that is the opposite of what I've seen in the CN program.
>I'm not sure how to stop this without a centralized authority shutting it down.
"There was a spillover effect," said Kevin M. Esterling, a UCR professor of political science and public policy and a co-author of the study. "It wasn't just a reduction from the de-platformed users themselves, but it reduced circulation on the platform as a whole."
I believe the term you're looking for is "chilling effect," not "spillover effect." This is not a novel effect of censorship by deplatforming.
> it reduced circulation on the platform as a whole
So it also doesn't take into account the potential long-term toxicity of driving everyone with certain views to other platforms, where those views will be reinforced - and where the "taboo" appeal of those platforms helps market them.
Free speech doesn’t survive contact with AI slop guided by disinformation prompts.
It isn’t that it isn’t worth protecting or caring for, it’s that it disappears under the noise floor. The bad actor doesn’t have to silence the free speakers, he just needs to amplify the bullshit so nobody can hear anything that makes sense.
No, nowhere is the article stating that only one viewpoint is allowed.
For example, if there were several scientists that were debating more precise ways of calculating the circumference of Earth, the discourse would improve if flat Earth information was excluded.
Framing people as “false info traffickers” reveals to me that these researchers do sort of believe only one viewpoint should be allowed? They talk like military censors do during wartime, or political commissars.
> Are we saying that the conversation is less angry when everybody basically agrees?
diversity attractive until polarized
Claims: Anger is firstly a masking emotion for the immature sadness that others' values are not the same as one's own. Anger is secondly an emotional manipulation strategy to enjoin others.
Censorship works great. That’s why we’ve been doing it for so long. It’s just that censorship is good for the censors not for the censored or the general population. Occasionally a censor will be benevolent and effective but one can’t count on such things.
Who claims that? I explicitly want censorship to target innocents. There is no single person's speech (even my own) that we can't live without. I would much rather the banhammer come down a few feet over the fence so that people can't get too close to it. I say this as someone who both wields and has been hit with the banhammer. You store your hammer in the freezer specifically for the chilling effect.
Strong moderation in communities does make those communities better. We're in one of them. Your workplace is likely one of them. Your social sphere is one too, hand-curated by you. IRL we call this social norms and boundaries, but when it's online or in print people get weird about it.
For each domain, we determine whether or not it is a fake-news domain by referring to a predefined list of fake-news domains. Our list of fake-news domains is defined by combining data from Grinberg et al.8 and the proprietary list from the journalism company NewsGuard (https://www.newsguardtech.com/newsguard-for-researchers), using the 29 March 2021 snapshot.
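The classification step the quoted methodology describes boils down to extracting each shared URL's domain and testing membership in a predefined list. A rough sketch of that lookup (the domains below are placeholders, not the actual Grinberg et al. or NewsGuard data, which is proprietary):

```python
# Sketch of the domain-lookup step described in the quoted methodology:
# pull the host out of a URL and test membership in a predefined
# fake-news list. The list entries are placeholders for illustration.
from urllib.parse import urlparse

FAKE_NEWS_DOMAINS = {"example-fake-news.com", "made-up-site.net"}  # placeholder list

def is_fake_news(url: str) -> bool:
    host = urlparse(url).netloc.lower()
    # strip a leading "www." so "www.example-fake-news.com" still matches
    if host.startswith("www."):
        host = host[4:]
    return host in FAKE_NEWS_DOMAINS

print(is_fake_news("https://www.example-fake-news.com/story"))  # True
print(is_fake_news("https://example.org/article"))              # False
```

Note that the whole approach stands or falls on who curates the list, which is exactly the objection raised in the reply below about Li Wenliang.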
Yeah, no thanks. Many things were misinformation until they were true. Don't forget the lesson of Li Wenliang[1], who was formally censured for "publishing untrue statements". We now know him as one of the first to document COVID-19.
Yes, this happens. But giving up and letting the lies fly willy-nilly has a still worse record. Every measure to combat disinformation is imperfect. Many are better than failing to combat disinformation in any way. And lumping the Chinese state together with fact checking organizations is not a good argument.
The Chinese state is a fact-checking organization, with all the foibles and follies that are associated with letting people (e. g. free-willed beings) fact-check other people.
Fortunately, reality allows for multiple inheritance, the Chinese state doesn't have to only be a fact-checking organization.
Multiple inheritance is a bad metaphor to invoke here, since that is when a class can have multiple parents, not when a class can have multiple instances.
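For readers unfamiliar with the terms, the distinction the comment above draws can be shown in a few lines of Python: multiple inheritance means one class derives from several parent classes, while multiple instances are just several separate objects of the same class (class names here are invented purely to mirror the metaphor in the thread):

```python
# Multiple inheritance: one class with several parent classes.
class FactChecker:
    def checks_facts(self):
        return True

class State:
    def governs(self):
        return True

class ChineseState(State, FactChecker):  # derives from both parents
    pass

cs = ChineseState()
print(cs.checks_facts(), cs.governs())  # True True

# Multiple instances: several distinct objects of one class.
a, b = FactChecker(), FactChecker()
print(a is b)  # False: two separate instances, one class
```

So "the Chinese state doesn't have to only be a fact-checking organization" is (loosely) a multiple-inheritance claim, whereas "there can be many fact-checking organizations" would be a multiple-instances claim.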
The comments I've found from Fauci about the lab leak theory are as follows:
> "If you look at the evolution of the virus in bats and what's out there now, [the scientific evidence] is very, very strongly leaning toward this could not have been artificially or deliberately manipulated … Everything about the stepwise evolution over time strongly indicates that [this virus] evolved in nature and then jumped species,"
You also have to remember what else was going on around this time. Namely, that the president was claiming to have seen evidence that it was a man-made virus from the Chinese, and suggesting it was a weapon, calling it the "Kung Flu", the "Wuhan Flu", etc. There were significant sociological impacts from this, including a dramatic increase in violence against Asian-Americans.
The National Review is an extremely right-wing news source citing a House Oversight Committee letter written by two absolute political hacks: Jim Jordan and James Comer. It attempts to hint at Fauci commissioning a paper that attempted to debunk the lab theory, but is very imprecise and hand-wavy. The article never comes close to backing your initial claim that:
> Fauci said from day 1 that this was not a lab leak and anyone saying that should be discredited
Let's go straight to the source of that National Review article, the House Oversight Committee's 2022 letter to Secretary Becerra [1]. If you wade through the innuendo in the letter, it's pretty clear that:
Dr. Kristian Andersen sent an email to Dr. Fauci expressing his worries about whether the furin cleavage site in SARS2 was naturally evolved or a product of genetic engineering. To address Andersen's concerns, Dr. Fauci organized an online meeting via Zoom with him and a group of other specialists, which successfully dispelled Andersen's fears.
The fact is, following the Zoom conference, Andersen, along with three other participants, authored a research paper on the origins of COVID-19 that was published in Nature Medicine in March 2020. This paper was not directed or "commissioned" by either Fauci or Collins. While Andersen may have shared advance copies with them and others for their remarks, a common academic convention, there wasn't any attribution given for their possible input. Suggesting that Andersen wrote the paper to advance Dr. Fauci's political agenda is just (yes) conspiratorial nonsense.
Republicans in general don't like China or the concept of expertise (almost entirely because the CCP has "Communist" in the name, and expert opinions tend to refute the majority of things Republicans fight for). Dunking on China is evergreen political fodder for the right, no matter how outlandish. In 2020, (and still today, as far as I've ever read), the preponderance of scientific evidence points to the origin of covid being zoonotic.
The atmosphere of speculation by politically-motivated buffoons didn't help clarify any of this, and continues to confuse people today.
Yes, let's believe China because Republicans hate them, so they must be right. China stopped any investigation in every way possible to cover up ANY possibility of finding evidence of anything. Days after the first outbreak in the market they destroyed all of the evidence... almost like they knew the virus was deadly.
How about the Russian disinformation campaign about Hunter's laptop, and the government's insistence that social media discredit and actively stop any spreading of the information? Dozens of intelligence agents said it was not credible and shouldn't be trusted. You know, the same laptop that was introduced into evidence today in Hunter's trial.
The Chinese state is a fact checking organization. I see no difference other than the fact the Chinese state has power and this is already extremely warped and corrupt, while the other fact checkers are merely striving for similar power.
The only implementation of fact checking I’ve seen which has yet to turn corrupt is Twitter’s Birdwatch which is a much better record than these journalistic organizations we call fact checkers who feel they should control truth.
Amoebas and blue whales: they both respirate. I see no difference other than the fact that a blue whale is already extremely large, while the other is merely striving for a similar size.
It has not been verified. There was testimony today where someone said "I have not seen evidence of inauthenticity"; this was reported as "it is authentic", but in fact she meant "I haven't checked if it is authentic".
When a sitting President is banned for political speech at the behest of a partisan narrative, that's anti-democratic authoritarianism. Sorry, "anti-authoritarianism" doesn't mean that unelected people with the power to ban have a democratic mandate equal to an elected president's, in terms of possessing a democratically acceptable ability to restrict a communication channel widely treated as The Commons. Those who banned him were given full political cover, if not a mandate, by POTUS's political opponents and an unelected, hostile media. Without that cover, there would have been too much norm-respecting political pressure for the ban to happen.
I have nothing backwards and your wordplay is overly simplistic and wrong, to be polite about it.
I'm not sure what you mean by banned or censorship, but Twitter isn't the government and I'm not sure how this example is "anti-democratic authoritarianism". Seems like they should be allowed to ban people from their platform who violate the terms of service. There are tons of platform/content providers that don't allow specific types of content even though it is legal.
If people disagree with Twitter they can leave the platform for another one or shareholders can act and have the firm change course. Trump still daily says whatever he wants on other platforms without the government censoring him.
What I mean is the undeniable integration of government agencies with Twitter Trust and Safety, at the time.
Right here, this ends our discussion.
But also the partisan support for the ban, on partisan grounds, from Congress and the agency-integrated media. Which is government interference with the ability to speak via that channel.
All of this "should be able to do what they want" is a nonexistent reality. It's fantasy.
The Terms of Service itself was crafted under agency pressure. Though this is a minor detail compared to the more ham-fisted integration.
You can be willfully or otherwise oblivious. However, no one owes that type of conversation much engagement. It's your job to become aware of the facts and broken norms surrounding the event if you want to have further discussion.
Trump was the head of the government agencies at the time he was banned from Twitter. You're going to have to go down some strange rabbit holes to find the logic to work that out. It's perfectly reasonable that the people running Twitter didn't agree with Trump's politics and picked a reason to ban him.
How about Twitter having a preexisting, clear Terms of Service that ALL users must abide by?
Don't forget that Twitter was at the time and remains a private venue that was not and is not a government run organization or application. The Supreme Court has ruled time and time again that private businesses have the right to refuse service to whomever they wish for whatever reason. The 1st amendment covers what one can say in public, but explicitly does not cover private repercussions for said speech, ONLY government repercussions. Twitter is not public, and was not public. Private businesses and entities can and do control speech on their platforms and premises all the time, and have numerous laws protecting their right to do so.
How about this one:
User: Hi, I'd like to sign up for an account.
Twitter: Sure, you're able to do so. Here are the rules we've established for our platform. By signing up, you're agreeing to our rules.
User: OK, I agree. deliberately breaks rules
Twitter: You can't do that. We are warning you. This is a private business, and you agreed to our terms. Your actions are in violation of the terms you agreed to.
User: Waaaaaaaaaaaaaaaaaaah waaaaaaaaaaaaaaaah muh free speech deliberately breaks rules again
Twitter: We're not joking. You can't post that sort of thing. It's against our rules. Do it one more time and we will be forced to ban you.
User: deliberately breaks rules again
Twitter: we are fully within our rights to ban you, and we're doing so.
User: waaaaaaaaaaaaaaaaah political persecution waaaaaaaaaaaaaah I'm going to sue you!
It has nothing to do with who the user is or was. Private businesses have full legal right to exercise control over their platforms. You break the rules, you are banned, simple as that.
I apparently didn't clearly convey my message. He'd have to use some twisted logic to blame the banning of Trump from twitter on the influence from government agencies and not on Twitter. That is what the person I was responding to was saying. I don't disagree with anything you said, nor do I think my original statement does in the context of the comment I replied to.
Trump was not "the head of the government agencies" in terms of functional control. Implying conspiratorial thinking is not an argument against the concrete facts. Which are that: a) the FBI was integrated into Trust and Safety at Twitter; b) Trust and Safety banned Trump while he was sitting President, against Trump's will; c) Supreme Court case law long ago determined that any government involvement in a private corporation's speech decisions must be considered an unconstitutional government violation of free speech. "Reasonable" is both untrue and immaterial.
Everyone citing "Freedom" here is ignorant of the facts and are going to be unhappy and surprised with how this matter is eventually settled by investigative bodies and courts.
Trump was consistently pushing the envelope and crossing boundaries with his posts on Twitter, and Twitter finally decided to just ban him and face the consequences of that decision.
Your suggestion that almost the entire media is controlled by the Democratic party, or is very favourable to them, is also very conspiratorial - it's extremely easy to find a massive amount of media that is comically critical of both parties at all times, regardless of which one is in power. Contrast this with actual authoritarian governments like China and Russia.
Nope. The FBI was long integrated into Twitter Trust and Safety. At that point, Twitter's speech decisions ceased being a private matter according to Supreme Court case law.
Please. Give us all a break. There's nothing conspiratorial about noting that the Media favors the Democratic Party. You're gaslighting, and moreover your effort is pointless. Fully half of the electorate knows this to be true, and that isn't going to change in our lifetime. They know it to be true because it has had terrible consequences.
It was the Media that both lent the political cover required for the Russiagate fiasco to continue for four years, effectively hamstringing an entire administration in spite of it going nowhere.
In addition, the entire nation witnessed, over and over again, the media inciting riots or demanding prosecution of riots at its singular will. Against the will of police. Against the will of local governments and citizens. The media incited and covered for nine months of domestic terrorist riots prior to the 2020 election, leading to the murder of almost thirty people. Essentially directing and covering for brownshirt terrorists for nine months before an election. The entire nation was terrorized for that extended period. To cover it up, they then demanded the strictest prosecutions of mimic rioters who misbehaved for three hours, or even just trespassed with what they thought was tacit permission. The difference between the nine-month and three-hour rioters? Media narrative and demands.
You can test out your spine on this matter, however ridiculous, but don't think that you'll get away with such assertions in this conversation.
Do you feel that there was any content that they could say that should have gotten them banned or do they just get to play by a different set of rules than the rest of us?
I remember reading something about how, on old Twitter, high-profile political figures and representatives were allowed more leniency in how their accounts were moderated, because it was considered more important that their message be seen out in the open. There are still some lines that shouldn't be crossed, but I don't think it's unreasonable to apply different standards to world leaders when their words carry the weight of their country. When a diplomat visits a foreign country, they also don't have the same laws applied to them, broadly speaking.
Trump in particular was limit testing constantly, and eventually it became clear that he crossed too many lines. We also have the benefit of being able to look back and see that all of his attempts at proving the elections were rigged have failed in the courts.
No one should be banned for content that isn't illegal. Anything less promotes partisan abuse of censorship mechanisms, which harms democracy. First amendment law provides an adequate guideline, as enforced everywhere else and as defined by the Supreme Court.
But yes, government representatives are different than the rest of us as they are the public voices of their constituents. Silencing them is the same as silencing their mass of voters. Which, again, is deeply antidemocratic.
Today, too many people believe in the unresolvable paradox that democracy only exists when their views are spoken to and when they win. Continuing with the paradox: at the same time, they believe that those who are too much of a "threat to democracy" to speak (or win) have to keep believing in the system or democracy is also at risk.
Censorship "in defense of democracy" has rendered the national discussion to be ridiculous, on its face.
> No one should be banned for content that isn't illegal
It is almost essential for the protection of free speech that private entities can and do ban people for content that isn't illegal.
When you've only got two categories of speech, (1) legal speech that is allowed everywhere and (2) illegal speech that is banned everywhere, there is going to be a lot of pressure from the majority to expand #2 to cover speech that is currently not illegal but that the majority thinks is repugnant.
By allowing private entities to ban speech that they don't want to have on their platform even if it is legal speech we have a third category between #1 and #2. That reduces the pressure to expand #2.
> No one should be banned for content that isn't illegal.
Trump, through policy on Truth Social, disagrees with this statement.
> Anything less promotes partisan abuse of censorship mechanisms, which harms democracy.
So just to be clear, you are claiming that Trump, through policy on Truth Social, is harming democracy with this policy.
> First amendment law provides an adequate guideline, as enforced everywhere else and as defined by the Supreme Court.
The First Amendment does not apply to private individuals or companies; it constrains the government.
> But yes, government representatives are different than the rest of us as they are the public voices of their constituents.
And in that regard, they use that voice to speak within the chambers of Congress. They have no extra rights, above and beyond that, which guarantee they can use my personal resources to spread their word.
> Silencing them
Trump was not silenced. The claim that he was silenced is 100% false, and a lie.
Or 99% spam. Advertising your legal services [1] or auto body shop is not against the law in the US, and the CAN-SPAM act doesn't apply to social media, so people will use automation to post about their business 100 times per hour if platforms can't prohibit that via terms of service.
Why? I'm referring to what comprises "good", meaning democratic, policy for political content. Not an autistic implementation of rules or lack thereof. But if you want to plaster a Vote Biden poster on some poor working person, then maybe that's a discussion.
But to engage in the nonexistent autistic premise, just because I guess. It's illegal to show pornography to users under 18 years of age. Pornography can be easily blocked on that premise, utilizing a blanket ban and no opt in. 90%+ of users would be fine with that.
Also, "the rest of (you)" are not even beholden to the "same set of rules". Twitter was happy to serve as a platform for actual high level riot incitement and support for nine months prior to the election. That's actual, full throated support of nationwide murderous riots for nine months running up to a national election. No required creative interpretation of phrasing nor intent. The acknowledged SOP has become two sets of rules.
I was banned from multiple places on Reddit for spreading "fake news" that the BLM riots caused over a billion dollars in damage [0]. I was called a racist, fascist, White supremacist who was spreading neo-Nazi misinformation.
Also, there is an interesting recent study from a group at Vanderbilt University that looked at the effect that European speech regulations have in muzzling of online speech. Featured in Reason Mag.
They're two sides of the same coin. But, frankly, criticizing this as censorship and stifling of free speech ignores the very real dangers of online {m,d}isinformation, and prevents adoption of efforts to mitigate it.
The reality is that we're flooded with false information, which can be generated by anyone with enough resources and motivation to change how a group of people think and act. We've seen how this can corrupt democratic processes and influence the outcome of elections with the Cambridge Analytica scandal (which still continues today), and how it can be used to rally people and cause social unrest in many countries. Social media is the most effective tool for spreading political propaganda, astroturfing campaigns, and conducting any other kind of mass psychological manipulation, starting with advertising, of course.
The current state of online services is an existential threat to modern civilization, which will only grow larger with the increase of AI-generated content. Governments and companies should be doing _more_ to control this, not less. I'm not saying that I agree with mass censorship and total government control over public discourse, but surely there's some middle ground between that and the current situation.
There is absolutely a middle ground: have multiple choices of user-selectable filters on open platforms.
Subreddits are the closest thing to an implementation of what I want today. Let me go to moderated subsets of the content on a platform without taking away my right to expand my view at any time I wish.
Twitter et al allow you to do some moderation with blocking and follows. But opaque recommendation engines and new accounts present challenges for both.
I'm not sure giving users control over what they consume would be a solution. If social media apps have proven anything, it's that the vast majority of people prefer the mind-numbing experience of highly curated content, with infinite scrolling and a minimal number of knobs and tweaks.
Users who want to curate their experience are outliers, and are probably less likely to be susceptible to misinformation to begin with.
I think the solution starts with strict moderation on behalf of companies hosting the content, accompanied by strict regulation of tech companies by governments. Along the way, passing regulation to adopt programs and initiatives that educate people about technology and critical thinking from a very early age will provide new generations with crucial skills to combat disinformation, which could hopefully eventually spread to areas of government and industry to further propel us in the right direction.
Tech approaches alone aren't the solution, and we need major social and political long-term changes as well. But strict moderation and regulation needs to happen as a start to stop the bleeding.