I can't help but feel we're in for a bumpy 2021.
I'm over and done with Trump, but it would have been a lot more effective for everyone if they had let him skulk meekly off into irrelevance.
The riots have been going all year. It was a different crowd this time, but riots are old news at this point. It wasn't a good look for the MAGA crowd and, more than anything, probably would have cowed them into submission.
But then Big Tech had to turn around and give everyone a reason to be pissed off again.
In particular since the claim that "riots have been going all year" is demonstrably false nonsense. Parades and rallies, maybe. Riots? What?
And how are you even going to equate riots with an invasion of the physical center of the US government by whiny "patriots" who had no other plan except to waltz in and wait for the next cutscene to start?
Minneapolis, Seattle, Portland, Kenosha, DC, etc. etc. etc.
> And how are you even going to equate riots with an invasion of the physical center of the US government
It's just a building.
It truly gives me shivers down my spine...
The Mozilla blog entry, subtext and all, does not say that they should decide what opinions are unwanted.
Who gets to decide which is which? Why would I want to tell people they are wrong about how they experience the world when they disagree with the way I do?
It is a fact regardless of a person’s experience. The word you’re thinking of is opinion. Which is how someone interprets a fact or a set of facts.
The past four years of "fact checking" have demonstrated those are not the sort of "facts" anyone is talking about.
Unfortunately, because people don't hold the same idea of what the word "fact" means, and some don't believe in the concept of objective reality at all, one person can speak of a "fact-checking website" and mean only objective facts, while another person hears that as a "prevailing-opinion-promoting website" used for non-objective claims.
"Objective truth checking website" would be better. The intent is unambiguous at least. 1+1=2, pi is definitely irrational, and the Earth is not flat.
Unfortunately when it comes to things like "evidence of widespread electoral fraud sufficient to overturn the result", that doesn't apply perfectly. That type of fact cannot be known absolutely; there's always that bit of wiggle room of possibility which feeds conspiracy theories, especially in a population distressed about many issues and distrustful of institutions and each other.
Such things can however be known beyond reasonable doubt to the man on the Clapham omnibus (https://en.wikipedia.org/wiki/Man_on_the_Clapham_omnibus).
This is about facts, not experiences of the world, or opinions.
Factual statements can be wrong. If you don't believe in telling people they are wrong when they are plainly wrong, what are your criteria for telling people anything at all?
Granted, the relationship between facts, truths, experiences, opinions and voices is a complex one, but somewhere in there are still the underlying concepts of true and false, right (correct) and wrong (incorrect).
More pragmatically, pertinent to the recent USA election: unfounded ideas that are hypothetically possible but unlikely, investigated by many independent minds of good standing, and found to lack anything approaching adequate evidence have to be treated as settled for the purposes of moving forward on an issue. As in: "no, seriously, we have looked at this hard enough now that we have to move forward treating it as a checked truth, even while acknowledging the now-unlikely possibility that we are incorrect."
People disagree about how much effort to put into investigating such things and how to assess evidence (or lack thereof), but for democracy and social stability (safety) to continue, most of them need to agree that sufficient process has been followed at some point. The quality of process and conclusiveness are essential. This is one of the hallmarks of effective democracy.
If you want to be especially respectful of people who you believe are wrong about key significant facts, then I think it best to acknowledge their facts are not literally impossible, but have been investigated and look highly unlikely as a result.
First, to be clear, that's not censorship, and it is entirely compatible with freedom of speech because it is not proposing to prevent speech, only to direct emphasis. Freedom of speech means you are free to speak and people who choose to listen are free to listen, not that you get equal airtime.
Second, we already have amplification of non-factual voices taking place. It's not as if we can avoid systematically amplifying one kind of voice over another, because anything we do results in that, including doing nothing. It's simply inevitable.
A well-intentioned free-for-all in communications leads immediately to the uneven amplifications we see now. There is increasingly justified concern that it is becoming dangerously unfactual lately, perhaps due to an interaction between human instincts and rapid changes in information technology, leading to a breakdown of society and democracy, thus the suggestion to promote factual voices.
> Who gets to decide which is which?
Something inevitably decides.
If by that question you are providing the implicit answer "nobody should", then you are favouring the current system of biased amplification, where things like outrage culture, economic incentives that promote polarisation, and widespread avoidance of good-faith reasoning about basic concepts are rather evident.
So we already have a system deciding which is which, even if we don't want one. It is impossible to avoid having one, even in principle. Let's acknowledge that hard reality, and discuss how to make a better system than the de facto one that emerges otherwise.
I don't know the answer of how we decide, but I think it is worthwhile to investigate the question in more detail than dismissing it outright, given what we see at the moment.
I suggest that ways to determine facts and factual voices could echo systems that have been developed over thousands of years for exactly this. We have a long history of developed legal, justice, political, journalistic, academic and scientific systems with a variety of methods for finding (or deciding upon) facts, to cope with the problem of people disputing what is true and false.
It's long-term censorship because if you can control speech for long enough, you can control the range of ideas people can be exposed to; the overall consensus of opinion drifts over time toward just the allowed range of ideas, and after enough time this changes the facts in consensus reality.
Otherwise you might as well say that every form of debate, persuasion and promotion are censorship, because they have that effect.
Calling that censorship is to miss the point of the more serious thing that is widely disliked, which is the bright-line distinction between opinions that are merely rendered unpopular and opinions that are actively blocked from being expressed or heard at all.
However, the point isn't to quibble over the meaning of the word. The point is: would it be harmful to functioning democracy and people's freedom of thought to "promote factual voices"?
By all means argue against doing so, and be concerned that it would result in long-term censorship effects but:
I've tried to make the argument that the absence of "promote factual voices" also results in long-term censorship (the way you are meaning it), because we get a de facto biased spread of opinions anyway as a result of social media dynamics and all the dodgy actors in them.
There are newsfeed algorithms, advertising, optimising for hits and clicks and engagement, nefarious actors, paid shills, all sorts of things. And though I'm loath to say it, a lot of low-quality trolling and thoughtless opinions which get amplified easily. But even without all those, social media has system dynamics which must inevitably produce something that steers the consensus towards an allowed range of ideas versus ideas which are discouraged.
I think we have no choice on that. The only choice is which kind of systemic bias to opt for.
I favour whatever structures lead to certain types of thinking (not the same as particular opinions): higher-quality debate, education, knowledge and wisdom in society (which sounds like healthier democracy to me), as represented by a broad spectrum of people, open to all, and designed to protect diversity of thought while promoting quality of thought. I'm avoiding saying what leads to that, because I honestly don't know the answer; only that "don't promote anything" is not necessarily the answer, nor unbiased, especially when there's plenty of other dodgy promotion going on behind the scenes already.
Put in eumemics terms, the argument I've presented in this subthread is that you can't avoid it, you can only pretend to avoid it and/or decide not to be deliberate about it, thereby having the effect of eumemics anyway but with different parameters.
That might be the better decision, to commit to not being one of the actors significantly influencing global dialogue, but there's no point pretending it isn't a decision or that it doesn't have consequences, or that the outcome is neutral, or that the outcome is what most people actually want. (Or that it isn't a paradox.)
One way or another there are powerful actors in the system, and if there weren't, it would create them. I suppose if you commit to the underlying principle of respect for people in a democratic manner, you might want to be neutral about which consequences "the people" can pursue: if they want war then war it is; if they want peace then peace it is. However, by and large, what most people say they want and what they actually get as a result of their actions are quite disconnected, due to very complex chains of consequences. This is well known, and people complain about it all the time.
As a result, even when trying to be neutral and respectful towards all people with regard to what the people want, it is not clear (to me anyway) that neutral communications platforms lead to what people actually want (collectively, democratically) any better than non-neutral platforms. Perhaps we should build things that are designed to find out what people actually want, in a robust and evolvable manner (collectively, democratically), and choose the biases that then provide it.
A well known example is moderation: Unmoderated platforms end up causing many people to leave them or to avoid speaking up, with the result that any unmoderated platform is dominated by highly unrepresentative viewpoints of the collective of people.
Moderated platforms are accused of censorship, "view-promoting" platforms are accused of censorship in your meaning of it (as long-term bias), yet unmoderated and un-promoting platforms also amount to censorship by that meaning, because they also quiet so many voices in a systematically biased fashion. There is simply no escape from this dilemma, and no honesty in pretending it doesn't exist.
It's not censorship, that's bad: it's Transparent Deplatforming™
People can really fluidly go between "yay E2E encryption" and "Facebook needs to make sure only 'truth' is allowed". Ban the lies!
Funniest thing was the Cambridge Analytica scandal, where Facebook was booed nearly universally because it had a fairly open API (which enabled people to consent to share their data with CA). A few tech-illiterate news articles later, and everyone was convinced that Trump won the election because of Facebook evilness (and, of course, CA had 'magic algorithms' which could change people's minds en masse just by showing them a few targeted ads; that's completely undeniable; a "trusted journalist" said so).
> Reveal who is paying for advertisements, how much they are paying and who is being targeted.
> Commit to meaningful transparency of platform algorithms so we know how and what content is being amplified, to whom, and the associated impact.
> Turn on by default the tools to amplify factual* voices over disinformation.
> Work with independent researchers to facilitate in-depth studies of the platforms’ impact on people and our societies, and what we can do to improve things.
*go to the article for the link
Short, clear, and forward-moving (I think).
Personally, I would appreciate a legal mandate that every targeted advertisement must include a link explaining which demographics or criteria are being used to target the ad.
Ideally the ad would also need to identify which criteria were applicable to the individual who was shown the ad, and where that information came from. That requirement might be onerous to implement, but maybe it could kick in at a certain revenue threshold for the ad network.
Imagine if people had to be told that they kept seeing embarrassing ads because their bank sold their interest in [product category] to a marketing agency. Or if you could inform your aging relatives that they shouldn't trust ads which target them based on their past multilevel marketing purchases. I can dream...
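To make the proposal concrete, here is a minimal sketch of what such a per-ad disclosure might look like as a machine-readable record. Every field name and value below is hypothetical, invented purely for illustration; no real ad platform exposes exactly this.

```python
import json

# Hypothetical targeting disclosure attached to a single ad impression.
# All field names and values are invented for illustration only.
disclosure = {
    "advertiser": "Example Corp",
    "amount_paid_usd": 1250.00,
    # Criteria the advertiser used to target the campaign:
    "targeting_criteria": ["age 55+", "interest: home fitness"],
    # Which of those criteria actually matched the person shown the ad:
    "criteria_matched_for_you": ["age 55+"],
    # Where the matching data came from:
    "data_sources": ["on-platform activity", "data broker: ExampleData Inc."],
}

# The "why am I seeing this ad?" link could simply serve this payload.
payload = json.dumps(disclosure, indent=2)
print(payload)
```

The per-person `criteria_matched_for_you` field is the onerous part, since it requires tracking data provenance for each impression; the revenue-threshold idea would gate exactly that requirement.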
I mean, apparently the solution is social networks running a private list of "authoritative" news sources and making sure the feed is weighted towards them. The three listed have a clear left-wing bias. The alternative was Breitbart, and some other network I haven't heard of, which was probably included to try to balance it out.
So it's really not "transparency", since there's no objective news-ecosystem quotient out there to be audited; it's just the reverse of what was used to promote fake news. Now it's just promoting what they think real news is.
The equation is pretty simple from my perspective.
Most of my Facebook time is pointing out to family members that the “news” they are sharing is lies and misinformation, such as “cosplay guy at the Capitol invasion was Antifa” — the evidence was pictures of certain individuals being on an Antifa web site, despite the pictures clearly being part of an article identifying the fascists.
Then there are the articles trying to twist “tracking vaccine recipients” into “surveillance vaccine recipients” and other such nonsense.
See the problem here?
1. Completely relinquish any editorial controls (which means advertising money goes bye-bye).
2. Completely sanitize things.
Smaller platforms operate at lower scale and can moderate manually (as they often already do, thanks to trolling and spam).
Sounds like a win-win to me!
I'd like to see it debated on its own merits, but that seems unlikely in the current climate
So long! Glad to see your tiny market share approach 0 a bit quicker.
Despicable call to arms from a zombie company coasting off its earlier innovation.
At work we don't even care to support firefox since no one uses it lol. Sad af, but true.
If you disagree, please consider being in the same situation with the political inclinations reversed, or just in another country where you disagree with the government but Mozilla doesn't.
> It's more urgent now than ever before to throw away all this baggage and start a WWW2 without these politicized entities and with lean clients maintainable by small groups of people.
Yes, that would be great. But if we want a libertarian world-wide network with services and rules that cannot be attacked by the establishment, it has to be either small and uninteresting to them, or completely infeasible for them to attack. Some new ideas are needed, because free software and open standards are apparently not enough to keep liberty alive on the internet.
Indeed. Maybe communications platforms, even private ones, should be carved out of the general ability to deny service. We have similar notions already, e.g., libel plaintiffs who are public figures must prove malice.
Even if one disagrees with whatever that communication is, we should recognize that stifling it, even when done by a private entity and not the state, sows distrust.
It was good knowing you. Look at what they did to my boy...
The rhetoric used portrays an evil which speakers want to exist so they can oppose it, with no room for clarification by those targeted.
I am no fan of Donald Trump but this scares me to death. I mean that in the most literal sense.
It doesn't just get to walk away from that, even if it has additional recommendations.
Twitter should never be a public forum unless the US government feels they need the existing user ecosystem when they turn it into a utility.
This statement is false. That was my point. Mozilla explicitly was supporting censorship. So, therefore, it is not "without censorship".
It seems like you are walking your original statement back though, and instead simply saying that you personally support censorship.
So great. Glad that you agree that your original statement was false, and that Mozilla is actually supporting deplatforming.
"Censorship is the suppression of speech, public communication, or other information, on the basis that such material is considered objectionable, harmful, sensitive, or "inconvenient." Censorship can be conducted by governments, private institutions, and other controlling bodies."
Anything that "suppresses" or controls speech is censorship, no matter who does it. That's the definition of the word.
Does not matter. This is still the definition of censorship, according to the relevant authorities on definitions of words.
Basically every definition that you find will agree with me that private companies can censor people.
> should be able to force
Who said anything about force? I am just saying that private companies can censor people, according to basically every definition of the word. You can have whatever opinions that you want on if censorship is good or bad.
I hate to say it, but I think that big tech feels responsible for radicalizing folks, and Trump is the perfect scapegoat. Is it really crazy to believe that tech wants to continue using its algorithms (known to cause radicalization) and is using Trump as a scapegoat?
I'll be switching my VPN to straight Mullvad instead of to a group that sold its soul to Google. Same tech, less baggage.