This article misses a crucial point: I do not want fact-checked information. I do not want a Ministry of Truth telling me what’s an acceptable set of statements to trust. I do not want a third party providing unsolicited opinions on how closely I conform to the acceptable statements a Ministry of Truth has blessed for public use.
I’ll provide three arguments. First, how will Facebook or any other entity that takes on the responsibility to fact check stand up to coordinated misinformation campaigns like “Iraq has WMDs”, where an omniscient fact checker could literally have saved thousands of lives?
Second, how will Facebook or CNN or Snopes or any other entity fact check disagreeing experts on a subject - for example, Sweden’s epidemiologists vs. the rest of the world taking opposing views on quarantine measures? Is anybody at Facebook or any other place qualified to fact check Sweden’s top government advisers?
Third, how will Facebook or any single entity fact check unqualified experts on subjects? When Albert Einstein submitted his paper on the photoelectric effect (which eventually won him a Nobel Prize), he didn’t have a Ph.D., nor was he enrolled as a student - he was just an anonymous clerk at a patent office. Today’s Reddit would brand him an “armchair physicist” conducting thought experiments. How on earth will Facebook or any Ministry of Truth attempt to fact check anything like this?
So, if you rule out Facebook’s ability to fact check coordinated government propaganda, government policy advisors who are literally balancing lives against the economy, and fundamental advances in science, what else is there to fact check?
A fact checker can only end up becoming a reinforcement mechanism for popularly held beliefs. Let’s ask Galileo how that worked out for him.
>First, how will Facebook or any other entity that takes on the responsibility to fact check stand up to coordinated misinformation campaigns like “Iraq has WMDs”, where an omniscient fact checker could literally have saved thousands of lives?
It gets worse. We did find WMDs in Iraq [1]. But those were chemical weapons, and obviously not the WMDs we were looking for in the first place. So even your omniscient fact-checker would be implicitly lying: while the statement “Iraq has WMDs” is objectively true, it isn't precisely true that they had the WMDs the coordinated misinformation campaign was about. Even with objective truth, a fact-checker could be selectively used and misused for nefarious purposes.
Yes yes yes. I am saving this comment so I can quote it again in the future.
Any number of now widely accepted truths went through periods where they were viewed with incredible skepticism.
It's inconceivable to me that anyone could even want to _have_ the argument for a central censor again. Haven't we already had this battle enough times? Is the evidence not incontrovertible that societies where people are free to speak and exchange ideas are better than societies where they are not?
It's rational, I think, to want our software to offer warnings and controls that let us, _as individuals_, choose to hide, filter, or shrink-wrap certain information. Completely surrendering that trust to someone else is utterly insane.
P.S. I'm the CEO of LBRY, and this stance is pretty core to what we do. Email me at jeremy@lbry.com if you want to say hi.
You are missing the whole point. Facebook (and YouTube etc.) do not simply show you all opinions in the world at the same time with equal weight. They select things for you to view based on your prior interactions with the website. So they can emphasize and deemphasize articles and videos to show you next. Across their entire userbase, selecting articles that are based in reality will have a large effect.
This is the real point: Facebook's "Ministry of Truth" is currently a "Ministry of Clicks and Ad Engagement", and Facebook is happy to interfere in the free flow of information if they can make a quick buck off it. Likewise with Google searches, Twitter feeds, and so on. There is no such thing as a for-profit internet content platform which truly engages in some competitive free market. Reddit is sort of close, at least in principle - Google and Facebook aren't close even on paper.
I would agree that this is gross, and I don't personally use Facebook. But the idea that profit-motivated filtering and selection is OK, yet additionally filtering out clear and dangerous misinformation is somehow 1984-level authoritarianism, is just dumb. Considering Facebook has been complicit in stolen elections and genocide, it's dangerously dumb.
I agree that I don't trust anyone to tell me what is true, but there is a base level of fact-checking that is beneficial: remove anything that is completely fabricated and makes claims without a single shred of evidence. There is absolutely no need for this type of "news." No organization can definitively curate the truth, but almost any can sweep out the garbage.
The problem is that many of the things shared on Facebook et al. are open to debate about what the actual truth of the matter is, depending on one’s political biases. We’re not necessarily talking about content stating that the sky is green, but, say, the accusations Tara Reade made against Joe Biden.
As you have no proof "Q" is not real, you would be censoring people's speculations based on your own speculations.
And since there is no objective proof of God, the supernatural, witchcraft, numerology, astrology, aliens, or the efficacy of communism, you'd better be ready to censor a lot of folks.
...and you go to Facebook for your scientific and world news?
Facebook is a toy. Relying on a platform with a built-in way to reply to posts with nothing more than a laugh-cry smiley to deliver you accurate news in the first place is ridiculous. They could never be a Ministry of Truth or whatever you're implying, because it's a toy. If they want to fact-check posts on their site, it changes literally nothing of importance. They don't, which is fine, but even assuming they did, they'd never be capable of anything resembling the regime in Orwell's novel.
People do go to Facebook for their news. Whether you like it or not, or think those people are fools, it's true. What Facebook decides is deserving of censorship matters because we have to live with those fools.
If you believe that Facebook is something more than a toy, then it seems reasonable that they'd have freedom of expression, press, speech, and so on. As such, if people choose to get their news from Facebook, it's no different than them getting their news from Reuters, FOX, the New York Times, or Bloomberg. Surely you wouldn't imply any of them lack editorial control.
Comment sections on news outlets can remove whatever they want, I don't see why it'd be any different for Facebook, if you genuinely consider them a news outlet.
Legally, I'm sure they can do almost whatever they want in terms of filtering/censoring/etc., so I don't think talking about their legal obligations or rights is very interesting.
They have behaved a certain way for their 15-20ish year history, and I'd argue that it was mostly as a neutral platform for communication, not really anything resembling a publisher, and they've gained an enormous userbase under this model. I think this is the correct way for a platform like Facebook to operate, and that's in part why I use their platform.
I don't welcome them deciding to fundamentally change how they operate and begin filtering more content, especially due to what I view as partisan political reasons.
A toy, is that so? I know quite a number of people (engineers, marketers, companies) that exist solely due to Facebook advertising. You should let them know their livelihood rests on a toy.
>...and you go to Facebook for your scientific and world news?
Why do you frame it as some ludicrous idea? Is this more preposterous than getting your science and world news from an email chain, or from Fox/CNN, or your wonky neighbor next door?
I happen to have some intelligent friends who share thoughtful, measured news articles on Facebook. Perhaps these complaints of junk science and tabloids being passed around are actually a problem of an individual's circle and who they actively follow.
This post seems more about personal dislike of Facebook rather than the parent commenter's point about the dangers of appointing Facebook as a Ministry of Truth.
> A toy, is that so? I know quite a number of people (engineers, marketers, companies) that exist solely due to Facebook advertising. You should let them know their livelihood rests on a toy.
There are physical shops for LEGO, too. Even third-party ones! Ooh, and King & Supercell are billion-dollar companies! I love toys, I think they're great. I don't think you should be getting news from them.
> Why do you frame it as some ludicrous idea? Is this more preposterous than getting your science and world news from an email chain, or from Fox/CNN, or your wonky neighbor next door?
It's just as ludicrous as the idea of people getting their news from Candy Crush, MySpace or a tabloid. Getting news from an e-mail chain is ridiculous if you don't know anybody on it, or who runs it.
> I happen to have some intelligent friends who share thoughtful, measured news articles on Facebook. Perhaps these complaints of junk science and tabloids being passed around are actually a problem of an individual's circle and who they actively follow.
Then surely you'd get the same information, say, in person. Or using e-mail. As such, Facebook censoring them would just mean you'd move somewhere else. No possibility of a Ministry of Truth there. There are people in Second Life that act as newscasters, too. Still a toy.
> This post seems more about personal dislike of Facebook rather than the parent commenter's point about the dangers of appointing Facebook as a Ministry of Truth.
I actually love Facebook. It's a neat toy. I love the research that comes out of it. If you search my HN profile you can see a lot of sentiment to this end.
You're right that many people's livelihoods depend on toys, but it's a social network, not a toy. Are Netflix, Twitter, and Google just time-pass toys? Then Hacker News must be a toy/social media site too. Are all social media sites toys, then?
You know as well as I do that the commenter I responded to attempted to use "Facebook is a toy" as a disparaging remark; this is an unnecessary semantics debate I won't continue further.
I'm afraid the reality is different, with an enormous number of people consuming their news through Facebook. I wish it weren't so and people would think harder, but nevertheless here we are, with enormous amounts of fake news spreading through these platforms. I do think that some form of fact checking would be better for some people, for their own protection.
> They could never be a Ministry of Truth or whatever you're implying, because it's a toy.
Well, apparently not: politicians the world over are frightened of Facebook’s potential for spreading misinformation. Recall that the 2016 presidential election was apparently turned via $100K of Facebook ads that had little or nothing to do with the candidates in question.
The difference between Einstein working in the patent office and the 'armchair physicist' isn't that we live in a more cynical era. The difference is that Einstein submitted his work to be published in a reputable journal. If you have some ideas and don't want to be called an 'armchair scientist,' you are free to post to a preprint server, and it will probably get accepted if the work is good. It has never been easier in history for an individual to share their scientific ideas with the world.
Personally, I could do away with all the armchair epidemiologists that have come out of the woodwork during this pandemic, and would much rather people and politicians get their information from experts who publish their thoughts for the scientific community.
If Facebook could offer at least as much moderation as the newspaper stand of yore, the world would be a much less polarized place, imo.
I've made this point a bit in my other comments, but largely I would say this:
As much as possible, you and others should consider your options.
If Facebook is as it is now, where individuals are free to spread misinformation (that is clearly convincing enough that it is spread), do you want to use Facebook?
If Facebook fact checks your information, which you do not want, do you want to use Facebook?
I know that depending on your use case, it may be difficult to find an alternative to Facebook. (I use it out of boredom!) But I think the ideal is that you continue to seek out better tools for interacting with people you care about, with finding and sharing information, etc.
Not only do experts disagree, but the prevailing wind of expert advice can change dramatically. In less than a month we went from being told coronavirus wasn't spread by human contact and masks kill you, to now it's highly infectious, you should wear a mask everywhere, and the ventilators we'd been putting you on might be killing you faster. If FB thinks they can figure out how to censor counterfactual posts in that kind of climate, they are fools.
So the enormously popular post telling parents to make their kids drink Lysol should not have even the slightest warning next to it because the poster could be like Galileo or Einstein?
It might seem that way, but, as the many posts below show, that is literally what is being advocated. There should be no disclaimer on posts advocating drinking Lysol because: the parents would have hurt the kid anyway, there are already warnings on the bottle, and the sole remedy should be punishing the parent. It is Poe's law incarnate.
I know (we all know) how hard it can be to let go when other people say wrong or otherwise bad things on the internet, but it's a skill we all need to develop. If you simply foment against the badness and wrongness, it only adds more badness. HN is an experiment in trying to co-create a place on the internet that doesn't collapse into rancor this way. Maybe you don't owe the person you're arguing with better (though maybe you do), but for sure you owe this community better if you're posting here.
Even if your interpretation was accurate, no one is going to give their kids Lysol to drink because of any comment in this thread. Meanwhile you do actual damage to this place by posting in the flamewar style.
I think you're begging the question and using reductio ad absurdum here.
dilippkumar was highlighting that if people are prepared to believe extremely fringe ideas, then simply telling them that a central authority thinks a fact is wrong will not effectively dissuade them. Your question elides that concern by simply re-pointing out that fake information is problematic. Fake information is problematic, but there is reason to believe that simply labeling the article about drinking lysol "false" is a sub-optimal strategy.
So the answer to your question is no, we shouldn't avoid labeling that post false "because the poster could be like Galileo or Einstein," but I also do not think that was what dilippkumar was saying. I think he was trying to point out the problematic nature of "fact checkers" for people who don't trust traditional sources of authority.
If you'd like an example closer to home, ask the crypto community what they think of NSA advice on ECC curves.
Facebook is not the web; it is a toy devoted to advertising.
It is not where potential Einsteins or Galileos debut their big new ideas. It is not where cover-ups about WMDs or anything else are initially exposed. All of that very sensibly happens on real sites. To the contrary, rather than being where good ideas originate, its sole purpose is to attract eyes for advertisement.
The catastrophic problem is their algorithm, which massively pushes conspiracy theories, outrage, and demagogues' propaganda that would otherwise never be seen and would never get beyond the crackpot website they happened to be posted on.
Facebook is not where any valuable information originates; it is instead where failed ideas and advertisements are endlessly pushed on its users. But any attempt to correct that algorithm, manually or otherwise, is now portrayed as censorship by those who benefit from it, to get support from everyone else. Likewise, it is very effectively used as a propaganda tool by some parties, yet ironically any attempt to slow that is portrayed as censorship.
It is not remotely where ideas fairly compete. To the contrary: shocking but false ideas are given an enormous advantage, but adding an annotation contradicting them is called censorship. To such an extreme that there is a warning on Lysol bottles not to drink it, but if a meme on Facebook telling people to drink Lysol has a similar warning attached by Facebook, it is called censorship.
The notion that Facebook could ever afford to bother with any but a tiny sliver of the most conspicuously false and harmful posts, let alone judge every post for accuracy, is not reasonable. It could not remotely be picking sides on every TV debate, reading every user's post, or reviewing posts about the photoelectric effect for accuracy. And at any rate, Facebook is not the web.
> dilippkumar was highlighting that if people are prepared to believe extremely fringe ideas, then simply telling them that a central authority thinks a fact is wrong will not effectively dissuade them.
Maybe the goal shouldn't be to dissuade them. It doesn't have to be a conversion process. Perhaps if wrong-thinkers (for lack of a better term) can be convinced to look at all of the evidence, rather than the evidence that reinforces their existing viewpoint, that's a worthy goal. You may eventually convert those who are open-minded.
There are always going to be people who can't be swayed by anything, so maybe don't try. Perfect vs. good, and all that.
You can't save people from their own stupidity. I'd rather have stupid thoughts said out loud so they can be analyzed and detected rather than repressed inside one's own mind. I think that the nonsense a lot of people are preaching online shows how much our education system is failing us. If this were covered up, I might be able to delude myself into thinking differently about that.
In modern times it's easy to only notice the loons and crazy people who use their freedom of speech to push bullshit. But a lot of cultural and scientific progress throughout history has been made by individuals who chose to go against the norms of their time. IMO you need to allow the crazy to get the crazy genius.
I've used this analogy in another thread, and I'd like to post it again to better understand the differences between public health and traffic laws.
"You can't save people from their own stupidity." Agreed, but how does that explain the rights-restraining acceptability of red lights and speed limits?
Red lights tell me, a law-obeying driver, that I must stop periodically while I drive through a city. Speed limits tell me, a safe driver, that I can't drive at whatever speed I deem safe. These are ostensibly done for "public safety," yet they place limits on what movements I can make. (For what it's worth, I have no problem with traffic laws.)
Note that these same laws limit pedestrians' movement too, such as walking on public freeways or crossing streets outside designated areas at designated times.
What makes limits on posting to Facebook different from traffic laws?
This isn't a snarky gotcha post; I'm trying to understand what the difference is between something with very little rights-related controversy (red lights and forbidding people from walking on freeways) and pandemic-related stay-at-home orders and Facebook moderation.
The difference is mostly that Facebook is a company creating restrictions for speech on their platform based on the views of the company's employees and shareholders on what speech should be acceptable there. Rules of the road are determined by governments, and all people have a stake in the process.
For most publishers I don't mind them having their own rules on what speech they would publish, because there are other publishers you can go to, but Facebook has almost a monopoly on social media. And because it's free for anyone to sign up, it's almost impossible to compete with their network. If you want to post or publish social media content and have a lot of people view it, you basically have to use Facebook's network. So you have to obey Facebook's rules.
It'd be like if a road company owned 90% of America's roads and had rules you had to follow in order to drive on them. Sure, there could be existing government rules of the road, but they wouldn't really matter because you'd have to follow the road company's rules on 90% of all roads.
It's fine to have restrictions on freedom of speech. We have them already. You can't lie under oath, you can't make death threats, you can't yell fire in a crowded room, etc. Now, because of Facebook's monopoly, they get to choose what speech is acceptable and what isn't, at least for most social media. We don't really have any insight into what choices were made and why they were made, either. I would rather have governments and their people decide this in courts and the legal system. At the very least, we deserve more transparency into how Facebook chooses to filter and select content for people to view.
If the Lysol post was never made, do you think the parents who would have fallen for it will be safe for the rest of their children's lives? The point here is that in your scenario the problem isn't the Lysol message, it's the low intelligence of the parents. If they fall for a post telling them to give their child Lysol, it's just a matter of time before they do something equally dangerous, and it should be surprising that they haven't yet done so. Are you going to monitor their interactions with the world 24/7 for the entirety of their lives? That's the only way you're going to accomplish what you're asking for here.
I'm no longer convinced that this is about intelligence.
It's easy to dismiss dissenting thinkers as "dumb hicks" or whatever epithet is trendy at the moment. But looking at a lot of the people who are promoting these theories on social media, few of them qualify as "low intelligence." In my observation, they tend to be celebrities, college-educated yoga moms, and even some high-profile tech types.
I work with low-education people. My department works with thousands of adults with an average fifth-grade education. In my experience, these people are more likely to follow the health advice of their local government authority than any crackpot conspiracy theory that floats through the internet.
Posts telling parents to make their kids drink bleach or disinfectant are, sadly, not actually a caricature. See, for example, how "Miracle Mineral Supplement" (aka chlorine dioxide, aka bleach) gets pushed by malign quacks as a cure for HIV, malaria, hepatitis, autism, acne, cancer, COVID-19, etc. https://en.wikipedia.org/wiki/Miracle_Mineral_Supplement
This one gets harder, as who has the better/right-er numbers? Is Bolsonaro lying? Probably. But how do you tell? Who's fact-checking the fact-checkers?
But the clear-cut cases aren't actually the problem. You don't need to censor things that are so obviously stupid. The bottle of Lysol says things like "Hazard to Humans and Domestic Animals" and "call a poison control center" etc.
To the point that it's incredible the few people who actually do it are even telling the truth about why, rather than being cases of insane parents looking for a cover story when they want to murder their kids, or Munchausen by proxy, or something like that.
Meanwhile the same principle gets you absolutely nothing in all of the non-obvious cases, because when the answer is non-obvious (and therefore much more problematic if wrong) then Facebook doesn't have it either.
Maybe he is, but if a system is to function properly it must work in the worst-case scenario.
Therefore, I disagree that this person is arguing in bad faith; they are in fact attempting to point out where the system fails in a worst-case scenario such as this.
No. Kids of such parents should be rescued from their tormentors. Stupid people will not go away just because you install some sort of overlord truth provider.
Free speech is the foundation of western democracies. Having #BigTech censoring and grading free speech is antithetical to this. If I want to verify the veracity of ideas and opinions, I am currently free to browse the internet and do my own research. I don't need to be policed by a commercial website with highly dubious surveillance-capitalism goals and have them tell me what to think.
I've sent a message to my district representative, encouraging a change to the law:
> In Green v. AOL (2003), the court established:
> There is no real dispute that Green's fundamental tort claim is that AOL was negligent in promulgating harmful content and in failing to address certain harmful content on its network. Green thus attempts to hold AOL liable for decisions relating to the monitoring, screening, and deletion of content from its network — actions quintessentially related to a publisher's role. Section 230 "specifically proscribes liability" in such circumstances. Zeran, 129 F.3d at 332-33.
> There is immediately a question why this "quintessential relation" only works in one direction, i.e. why even when "monitoring, screening, and [deleting content]", a provider should not be considered a publisher, and thus ineligible for immunity under Section 230. The courts may not agree with that argument, but I would say that it highlights a serious deficiency, and the law needs to be changed in a similar manner as Josh Hawley's bill: https://www.congress.gov/bill/116th-congress/senate-bill/191...
> In fact I would argue that his bill does not go far enough. Getting an FTC rubber-stamp certifying "no political bias" seems like a vague and squishy system that will be easy to game. Perhaps the law should state that any provider above a certain size which takes action to "monitor, screen, or delete content" that is not illegal, and does not respond to an appropriate claim by reversing this decision, should forfeit immunity under Section 230. This would be very controversial, because it effectively bans e.g. YouTube from having their own "code of conduct" independent of US law. But I think that is exactly what needs to happen - YouTube is far too big for a "community standard" to make sense, and any attempt to do this will be abused for censorship. These companies should be forced to engage with all of us in the making of laws about free speech, they should not effectively be able to declare what free speech is by fiat.
I have no idea why you're being downvoted. The legal background on this problem is useful, and highlights the real problem with platforms like Twitter, YouTube, and Facebook engaging in content moderation in any capacity beyond preexisting legal mandates (e.g. removing illegal content). The key to the judgment in favor of AOL was their "hands off" approach. (https://www.eff.org/issues/cda230/cases/green-v-america-onli...) As soon as today's social media platforms give even the implicit guarantee to their end users that they've vetted content to some "standard", they're just asking for liability. Once they get into the politics game, as Hawley's bill seeks to prevent, they're gonna face FEC regulations and a host of state and local election law issues as well. Is this really the road they want to go down? I don't think the execs, smug in the certitude of their vision of what their communities "should" be seeing and not seeing, are thinking this through.
This is an excellent summary, and I fully agree with it. As a modest addition:
- In many important cases, there is no fact, just differing viewpoints and opinions. One man's terrorist is another one's freedom fighter. Violent insurgency for one, legitimate fight for liberty and justice for the other. In fact, reasonable people will even disagree on whether in some topic there are facts or only opinions, let alone about their truth value.
I think a better analogy is 'one man's investigative journalist is another man's conspiracy theorist'. We don't need #BigTech telling us which is which. Most of us can figure out who is dealing with real facts, events which have happened, etc. The current crackdown on free speech is the equivalent of digital book burning. Free choice and free speech were ultimately reinstated after the European events of the 1930s.
This is absurd. There is an objective reality and things actually do happen. You can observe, record, and measure these things. Whether or not we say someone is a terrorist or a freedom fighter doesn't change the fact that they e.g. drove a truck bomb into a government building.
Yes, "they drove the truck ..." may be a fact, but why they did it, what they wanted to achieve with it, what effect it actually had, are opinion-based, and are far more interesting than the underlying objective fact.
Otherwise, you could report the JFK assassination by focusing on the weather that day and the tire pressure of his car, and the angle in which sunlight reflected on the windshield.
Apart from the aggressive nature of your comment: indeed, in most areas of interest to humans there is no objective truth, or it is not as interesting as the motivations and interests of the involved parties.
According to your own beliefs, your statement above can't be true. But one thing about it is certain: it is not objectively interesting to me in the slightest.
It seems to me that there's an obvious alternative to checking facts, but it's one that scares FB much more than fact checking. Suppose they were transparent! Suppose it was easy to see unobfuscated statistics about who is paying for what on FB. Suppose journalists could easily look up how much white nationalists are spending on targeted advertising in each US state. Suppose we could see how much Exxon, Chevron, Koch Industries are spending, or health insurance companies, or anyone else for that matter. Suppose we could also see how much engagement there is with each post from Breitbart, NYTimes, Young Turks and so on. If we had these facts, as Facebook does, we'd be off to a good start.
Why just stop at Facebook? All businesses operate on the model where they take dirty money and try to do clean things with at least some of that money. Heck, most charities operate this way. Let's just make everyone transparent. We should know if the Red Cross is receiving money from Smith and Wesson. Bad individuals and corporations make donations to good causes all the time. You can think of it as moral outrage, but personally I think of it as reparations.
The consequences of that action will be as follows: Most of these businesses AND charities will drown in the moral outrage and close their doors. And that will have a detrimental impact on society, because the good that those businesses/charities did will go with the bad.
Facebook has one powerful good use case. It makes it trivially easy to connect with relatives and friends far away from us. There are other platforms out there, but until Facebook, there wasn't much, and even now, Facebook is the simplest and most convenient of them all. If Facebook dies, this good use case goes with it.
> All businesses operate on the model where they take dirty money and try to do clean things with at least some of that money.
That's a bit too cynical even for me. There do still exist businesses that simply produce a product or perform a service, selling it honestly without any tricks or subterfuge.
It's realistic and not cynical at all. Just the way life really is.
Try this thought experiment: The local mafia needs to eat, sleep, buy cars, equipment. Should local restaurants, grocery stores, rental complexes, dealerships, the local Home Depot, accept their patronage? Is that dirty money or clean money? Is Home Depot morally corrupt for selling lumber to the mafia boss for his new deck?
The answer to these and more is: no. Moral policing never works. We have laws and law enforcement to pursue criminals. It's their job, not Home Depot's.
That's funny! Did you try it? I find it impossible to get any useful information out of it.
Literally the first ad that Facebook shows on my "newsfeed" is by "Jellop", but when I search for Jellop in their ad library, I get no results. In fact, I can't seem to get any results at all, regardless of what I search for, or what I select as the country.
Perhaps I misunderstood. The tool shows any ads on political and social issues, which should cover all the ads you mentioned (e.g. Exxon, Koch, insurance cos.), but won't cover Jellop ads.
That is a fair point! I did mainly mean to focus on social and political issues. I guess the question is what counts as transparent. In my mind, it should not be as easy as creating a new "page" to hide the connection between different sources.
Perhaps I'm being cynical, but my impression is that FB is aiming to provide just enough information that people will say "they're doing that already", and not enough that people can really understand what's going on. But I might be wrong.
The Ad Library claims to show all active ads, not just political/social ones. That's presumably true, but hard to verify or make use of. The "Jellop" ad that I couldn't find was missing precisely because the page name was something else they'd created for that ad.
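For what it's worth, the Ad Library also has a programmatic interface, which is the closest thing to the transparency proposed upthread. A minimal sketch of the kind of spend lookup a journalist could script, assuming a valid access token for the documented ads_archive endpoint (the search term, API version, and field list here are just examples):

    # Sketch: query Facebook's Ad Library API for political/issue ads.
    # ACCESS_TOKEN is a placeholder; spend and impressions come back as
    # ranges (lower/upper bounds), which is part of the "just enough
    # information" complaint above.
    import requests

    ACCESS_TOKEN = "..."  # placeholder

    def political_ads(search_terms, country="US"):
        resp = requests.get(
            "https://graph.facebook.com/v7.0/ads_archive",
            params={
                "search_terms": search_terms,
                "ad_type": "POLITICAL_AND_ISSUE_ADS",
                "ad_reached_countries": "['%s']" % country,
                "fields": "page_name,funding_entity,spend,impressions",
                "access_token": ACCESS_TOKEN,
            },
        )
        resp.raise_for_status()
        return resp.json().get("data", [])

    for ad in political_ads("climate"):
        print(ad.get("page_name"), ad.get("funding_entity"), ad.get("spend"))

Note that this only surfaces what Facebook itself chooses to categorize as a political/social-issue ad, which is exactly the "Jellop" gap described above.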
As much as I dislike Facebook, I think the root problem is the lack of critical thinking many people have. Our education system, optimized as it is for standardized tests, doesn't really help with that.
For example when I state opinions I don't expect everyone to take them as fact without doing their own research. But that's pretty much what a lot of people do.
Doing your own research should be the norm (you're already using the internet!) with any opinions you read, like on Facebook. But there are sources you should be able to take seriously, like the government. However, there is one government official who lies outright on Twitter all day, and a good majority of the country believes him. When you believe that, it's a lot easier to make the jump to believing everything you see on Facebook.
One thing Facebook should do is make the news feed fully configurable and deterministic, so you only see things from people you know, in an unbiased way. That will never happen, since that's a big part of their business model.
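To be concrete, the deterministic version is trivial to express. A toy sketch, with a hypothetical Post structure standing in for Facebook's actual data model:

    # Toy sketch of a deterministic feed: friends' posts only, newest first.
    # Post and friend_ids are hypothetical stand-ins, not Facebook's real model.
    from dataclasses import dataclass

    @dataclass
    class Post:
        author_id: int
        created_at: float  # Unix timestamp
        text: str

    def deterministic_feed(posts, friend_ids):
        # No engagement scoring, no personalization: same input, same output.
        return sorted(
            (p for p in posts if p.author_id in friend_ids),
            key=lambda p: p.created_at,
            reverse=True,
        )

The hard part isn't the engineering; it's that a feed like this doesn't maximize time-on-site.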
As much as I'd like to hammer on Facebook for this, they're just one of many, many questionable actors here.
Almost 10 years ago, I conducted an experiment. I watched an hour of CNN every night, but it was never that night's coverage - it was from exactly two weeks earlier.
It was amazing how much "breaking news!" was irrelevant or just outright wrong, how many large trend predictions were wrong, and how many "[person] will do X" were wrong. While the predictions could have been portrayed as opinions, they were presented as facts and the obvious next steps or conclusions.
I realized pretty quickly that avoiding CNN kept out the blatantly wrong information so even if I didn't replace it with anything, I was net ahead.
A few years ago, I came across an article that made me realize some portion of it was probably on purpose.
Paying people to review content costs money and that eats into their bottom line. Efficacy and political fallout are factors for sure, but "company avoids spending money" doesn't need much further explanation.
This is no different from a company using cheaper (and perhaps less safe) parts, using the cheapest call center they can (and letting customer service suffer) or any other way of cutting corners to save a few cents.
IMO, for two reasons. First, because some facts, especially in news stories, are easily black-and-white: the car crashed, the person died. Some facts are greyscale. The trouble you incur from fact-checking the greys isn't worth the time, money, or effort.
Which leads to reason number two: Facebook is less Algorithms Will Save The World™ than Google, but in spite of its mantra of "connecting people," it's not all that interested in interacting with actual people. Wetware is messy, time-consuming, and expensive. And while there are plenty of companies that find success working with people, that sort of work doesn't fit into its SV Bubble mindset, or lead to hockey stick-style revenue.
The answer is obvious: Facebook loves misinformation.
Facebook's business model is fairly straightforward: they want you to spend as much time on Facebook as possible so you can consume as many ads as possible. Contentious, provocative content drives engagement. It's easier to provide an unending source of engaging content if you don't restrict that content to the set of things that are true. Truth has the unfortunate tendency to be boring in most cases.
If Facebook nuked all the lies, Facebook would be less interesting and ad revenue would go down.
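As a caricature (the incentive in code, not Facebook's actual ranking): if the objective is predicted engagement, truth never enters the scoring function.

    # Caricature of engagement-maximizing ranking. The point is what's absent:
    # nothing in the score rewards accuracy, so engaging falsehoods rank well.
    def rank_for_engagement(items, predicted_engagement):
        # predicted_engagement: a hypothetical model mapping an item to its
        # expected clicks, comments, and shares. Whether the item is true is
        # never an input.
        return sorted(items, key=predicted_engagement, reverse=True)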
Ah yes. Let's have Facebook be the arbiter of truth; after all, their behavior so far is nothing short of the pinnacle of ethics, wisdom, and transparency.
Let's not worry about how that could backfire at all or the countless times that the "correct" opinion has turned out to be wrong. The truth is too flimsy to stand on its own. We must protect our inferiors from wrongthink.
As long as Facebook is such a large part of how a large number of people interact with each other and share information, the risk posed by their power is great.
So perhaps the question is: where is the greater risk? That they merely facilitate the spread of misinformation as a neutral party, or that they take an opinionated stance to reduce the spread of misinformation, but hold the power to bias that filter?
I don't like either option, but I also don't see the multitudes of individuals largely choosing an alternative for their interactions any time soon either. An ideal for me would be a much more decentralized internet, but the many initiatives to make that happen are arguably much less successful than Facebook's grasp on the power it holds.
That's the job of news organizations and journalists, who have access to experts to verify the information. Also, it seems like Facebook is just a scapegoat here. IMHO, the real problem is that people on both sides of the political spectrum are re-sharing "news" that they have zero idea is true or not.
This is essentially an "individual responsibility" argument.
Do I think people "should" be individually responsible? Yes. Do I believe that's going to happen? No.
Forty years ago we got our news from a handful of networks that stuck to "the middle". It was no doubt imperfect, but if they had posted a crazy conspiracy theory on the nightly news, they would quickly have vanished as a network. On the other hand, convince a few million individuals to share a far-left or far-right story full of misinformation, and it works. The individuals are not "shamed" into avoiding that mistake again.
It's also a "Facebook is not a news organization or a fact-checking organization with access to an army of experts in every domain" argument.
>Forty years ago we got our news from a handful of networks that stuck to "the middle". It was no doubt imperfect, but if they had posted a crazy conspiracy theory on the nightly news, they would quickly have vanished as a network.
I do not think that rumors and misinformation were less prevalent 40 years ago. Is there any consensus (not sparse data) on that? My hunch is that we're in a much better situation today than ever before. Anyone with an internet connection and some free time has the ability to fact-check a piece of news. This was not available to people earlier, so they had to use their judgement and either choose to believe it or not.
Only allowing "true" information on any platform (or in any form of discourse, really) is effectively impossible given the subjective nature of truth. Philosophers have spent their lives arguing this topic, a tech giant isn't about to answer that question.
Agreed there is no guaranteed way to only allow truth. However, there are many easy ways to remove utter nonsense. Maybe we should start with that and see how it goes.
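To make "sweep out the garbage" concrete, a deliberately minimal sketch; the hoax list and substring heuristic are hypothetical placeholders (a real system would match claims far more carefully):

    # Sketch: flag only posts matching a curated list of known fabrications.
    # KNOWN_HOAXES and the substring match are illustrative placeholders.
    KNOWN_HOAXES = {
        "drinking disinfectant cures covid-19",
        "5g towers spread the virus",
    }

    def flag_known_hoax(post_text):
        text = post_text.lower()
        return any(hoax in text for hoax in KNOWN_HOAXES)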
I agree in principle (e.g., Pizzagate and similar conspiracy theories), but delegating the arbitration of facts to a private company seems dangerous territory to me. The inconvenient answer as it appears to me is that folks have to discern what is nonsense and what isn't, which leaves us exactly where we are.
>I agree in principle (e.g., Pizzagate and similar conspiracy theories), but delegating the arbitration of facts to a private company seems dangerous territory to me.
No one has delegated "the arbitration of facts" at any universal or societal scale to Facebook, nor is Facebook attempting to arbitrate all "facts" or "truth" on their own platform. The principle with which you agree appears to be the principle Facebook is actually using, and I see no "dangerous territory" in them moderating their own platform. There are plenty of other platforms where Pizzagate and other such conspiracies are taken as fact.
> The inconvenient answer as it appears to me is that folks have to discern what is nonsense and what isn't, which leaves us exactly where we are.
This is not an inconvenient answer so much as a thought terminating cliche.
I don't think Facebook has the public good in mind when they develop their product, but their bottom line first and foremost. They're a private company, they're beholden to their shareholders.
I also don't think that shifting responsibility from software to humans is a "thought terminating cliche" but a necessary step in how we orient the conversation about the mass dissemination of propaganda via tech platforms. There's a reason folks are eager to believe conspiracy theories and misinformation.
That does not sound like a good solution to the problem of powerful bad actors using social media to spread conflict and misinformation. In fact, it is not a solution at all. I don't understand this attitude of defeatism. Clearly how things are is bad. We should explore solutions. Is that too radical for you? Asking private companies to discern between nonsense and good facts is something that is commonly done with all publications - newspapers, magazines, TV, radio. This is nothing new. Healthy moderation has always been a key part of civil discourse - and FB's moderation is utter garbage. The solution? Demand they moderate less badly.
I pay a newspaper to discern the "truth" of real world events. They sell credibility. They have an interest (both ethical via their staff - most journalists want to tell the truth, and financial - why pay for a garbage paper) in being credible.
Facebook sells ads and has a history of poor decision-making with regard to their platform. Would Facebook improve the credibility of their platform at the expense of their bottom line? I doubt it. Facebook won't solve this problem for us. This is a human issue, not a technological one.
From what I gathered FTA, it sounds like the author is putting forth the notion that Facebook is relying on old, mostly debunked theories around a "backfire effect" - where fact-checking increases polarization and results in the opposite of the intent of the fact-check.
The author then goes on to discuss how more recent studies (including their own research), and even some of the original authors of that previous research, have changed course and believe that fact-checking can indeed help course-correct those misled by misinformation.
Therein lies the rub: according to the author, Facebook's policies around the original research haven't been updated to match more recent findings. This can be troubling given the stakes at hand.
I don't know his name, but what is your opinion on that guy from Google who said we shouldn't force women into STEM for the sake of equality, but let them do what they want (which may not be STEM)?
Is the world better off without that opinion (or fact, depending on who you ask)?
You certainly can fact check the assumptions behind opinions or their sources, particularly in a venue where the intent of the author is to influence the reader into accepting an opinion.
I still don't understand why social media has to censor things when it could just as well label them as inaccurate / spam according to experts or something.
Now flat-earthers think they are right, like Galileo.