We're all sharing the same links, articles, jokes, memes, ideas, cat pictures. What differentiates a place is its sense of community, not its content. It's easy to forget that because Twitter long ago stopped being a community.
Given only a fraction of account bans make the news, the problem was probably much bigger than the news indicates. Maybe 10-100x (what's the ratio of newsworthy people to regular folks?). Also, when a prominent user is banned, others see that and clam up on the topic lest they face the same fate; they tread on eggshells, which doesn't constitute a healthy community for them.
> Cloth masks are ineffective against covid, that's a true statement, every doctor in 2019 knew it, and now I'm banned on twitter for saying it? Interesting.
Keep in mind that for every famous ban, there might be a few orders of magnitude more bans of small accounts we don't hear about, given Twitter "use(d) Artificial Intelligence to identify posts" "that are misleading enough to cause harm to people" [2].
Also keep in mind this is one topic of thousands that could get an account banned; there is no shortage.
Please keep in mind my broader point is simply that Twitter might have felt like a community for some, but not for everyone. If someone's banned for stating hard truths, that's frustrating to say the least, and I think to some extent it makes the individual feel like it's not a welcoming/fair community to be part of.
The Elon era seems to emphasize hypocrisy and censorship at the command of authoritarian regimes, while Elon himself posts disinformation about election laws in the US.
>The Elon era seems to emphasise truth over sensitivities
I've never heard anyone claim that misinformation went down with the acquisition--Musk infamously gutted that department, and I've seen plenty of claims, at least from journalists and the EU, that the problem is now worse. Anecdotally, I left Twitter not for ideological reasons, but because my timeline became filled with algorithmic noise from what were clearly bots I wasn't following. Do you have evidence to back up that claim?
>some suspected (but couldn't prove) being shadow banned for having contrarian viewpoints.
I would think we would have seen actual indication of that by now if that were true; given all the "Twitter files" and whatnot that turned out to be a nothingburger, if there was a there there, employees were certainly motivated to air it after the acquisition, if not before. In any case, it's certainly indisputable that Musk has been more than willing to artificially inflate accounts he wants to promote (the latest I know of being the Mr. Beast debacle), which, outside of expressly labelled advertising, I don't remember being a thing before, though again, I'm open to the possibility of being wrong about that.
Community notes are great and an important tool to fact-check disinformation from politicians and VIPs to which traditional fact checkers often turn a blind eye. I say this from Brazil, so YMMV. While I'm not familiar with the Mr. Beast debacle on Twitter, he and his content are extremely popular, so it's natural for his content to be recommended. What wasn't "natural" were Twitter's recommendations prior to that. Though there are many other reasons for not using Twitter, Reddit, Instagram, TikTok, etc.
I can’t stress this enough, for anyone who (like myself) for some reason didn’t think this was possible: Twitter’s community notes are no different from tweets in their ability to spread misinformation.
What makes them worse than tweets (at least the original immutable tweets) at combatting intentional disinformation is that Birdwatch notes 1) are visually formatted to communicate an impression of absolute ground truth, and 2) don’t reflect any controversy or edit history.
A trending tweet with a false “correction” (such as the one that claimed a video of Xinjiang police brutality against an Uyghur was showing Taiwanese police) would be viewed by millions of people, 99% of whom would read that note completely uncritically, before it would get corrected. The people who are equipped to recognize the lie, and who care to fight to get it corrected, are few compared to the army of internet trolls spreading that lie.
Eventually the note may get rewritten—but at that point the tweet is no longer trending; the operation was a success and no one even knows that it took place. Since the note gets rewritten with no history maintained, the only evidence of the original malicious, false correction would have to be in that updated note, which is obviously an unpopular choice because it makes the new note harder to read and makes future readers do extra work to untangle what happened there.
(Incidentally, one of the things that could reliably be used to combat bias—labeling tweets from accounts that are known to be associated with governments or such—was nuked by Elon right away.)
As I said, your mileage may vary. This looks like something that can be improved, and an exception. I'd rather have imperfect decentralized information checking than centralized information checking that is known to be partisan, biased and easily bought. But please downvote me more, as it shows that what you're truly in favor of is censorship and a monopoly over the narrative.
I do not deny your experience and I saw many useful community notes before. It took me seeing this case to understand that it is actually a bad idea, for the reasons I mentioned (illusion of absolute truth while being open to manipulation & showing no history of controversy).
Also, it is not technically decentralized, it is Twitter (a centralized platform)… If it were truly fully decentralized, it would be vulnerable to such attacks even more, right? If you are up against a totalitarian government controlling the 2nd most populous country, there can always be more people who claim the false correction. There was a minority of people who got the correction fixed, and if it was actually decentralized then how would they be able to?
> it is not technically decentralized, it is Twitter (a centralized platform)
Community notes are generated by users, not the platform. It's not perfect, but it's better than having a mainstream media oligopoly deciding what is truth and what isn't.
> If it were truly fully decentralized, it would be vulnerable to such attacks even more, right?
The algorithm tries to prevent this kind of abuse: "the Community Notes rating algorithm explicitly attempts to prioritize notes that receive positive ratings from people across a diverse range of perspectives". See Vitalik's analysis: https://vitalik.eth.limo/general/2023/08/16/communitynotes.h.... But from what you're telling me, it looks like the CCP found a way to game it.
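Roughly, the scorer described there is a small matrix-factorization model: each rating gets explained by a user intercept, a note intercept and a one-dimensional "polarity" factor, and a note only surfaces when its intercept stays high after the polarity axis is factored out. Here's a toy sketch of that idea in Python; the toy data, variable names and constants are my own simplification, not the production scorer (which X has open-sourced with extra priors, thresholds and scoring passes):

```python
# Toy sketch of the Community Notes scoring idea (matrix factorization).
# Simplified for illustration only; not the production code.
import numpy as np

rng = np.random.default_rng(0)

# ratings[u, n]: 1 = user u rated note n "helpful", 0 = "not helpful", nan = no rating
ratings = np.array([
    [1.0, 0.0, 1.0],
    [1.0, 0.0, np.nan],
    [0.0, 1.0, 1.0],
    [np.nan, 1.0, 1.0],
])
n_users, n_notes = ratings.shape
rated = ~np.isnan(ratings)

mu = 0.0                                # global intercept
b_user = np.zeros(n_users)              # user intercepts (how generous a rater is)
b_note = np.zeros(n_notes)              # note intercepts ("helpfulness" score)
f_user = rng.normal(0, 0.1, n_users)    # user polarity factor
f_note = rng.normal(0, 0.1, n_notes)    # note polarity factor

lr = 0.05
lam_i, lam_f = 0.15, 0.03               # intercepts regularized harder than factors

for _ in range(3000):
    for u in range(n_users):
        for n in range(n_notes):
            if not rated[u, n]:
                continue
            fu, fn = f_user[u], f_note[n]
            pred = mu + b_user[u] + b_note[n] + fu * fn
            err = ratings[u, n] - pred
            # Because b_note is penalized harder than f_note, agreement that
            # splits along one "side" is absorbed by the polarity factor, and
            # only cross-perspective agreement pushes the note intercept up.
            mu += lr * err
            b_user[u] += lr * (err - lam_i * b_user[u])
            b_note[n] += lr * (err - lam_i * b_note[n])
            f_user[u] += lr * (err * fn - lam_f * fu)
            f_note[n] += lr * (err * fu - lam_f * fn)

# The real pipeline only shows a note if its intercept clears a threshold
# (around 0.4 in the public docs); here we just print the fitted values.
print("note helpfulness (intercepts):", np.round(b_note, 2))
print("note polarity factors:        ", np.round(f_note, 2))
```

The heavier regularization on the note intercept is the "diverse range of perspectives" part: if a note's fans all sit at one end of the factor axis, that agreement gets soaked up by the polarity term instead of boosting the helpfulness score.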
Good point that the algorithm tries to compensate for the perspectives, but I’m sure it still comes down to a popularity contest.
Generally, the platform determines the algorithm that decides which note wins, so that’s centralized. The algorithm depends on which users vote, how many, and how. Those users exist on the platform, which requires registration and can deny any given user. Centralized.
Further, no guarantee that the actual algorithm in production matches the one made public, but I guess they have no reason to lie here.
> from what you’re telling
It’s not just me telling: the tweet has been up so far, and the community note does its best to awkwardly convey the controversy that would’ve otherwise been completely lost due to the ill-designed way community notes work.
> Good point that the algorithm tries to compensate for the perspectives, but I’m sure it still comes down to a popularity contest.
It isn't clear if the polarization score has only one dimension, which would capture the US culture wars well but fail to capture nuances outside of them, or if it's more complex than that.
> Generally, the platform determines the algorithm of which note wins, so that’s centralized. The algorithm depends on what kind and how many users vote and how. Those users exist on the platform which requires registration and can deny any given user. Centralized.
Yes, not perfect but still better than the traditional media oligopoly.
> Further, no guarantee that the actual algorithm in production matches the one made public, but I guess they have no reason to lie here.
The algorithm and the data are open. It is reproducible.
> It’s not just me telling: the tweet has been up so far, and the community note does its best to awkwardly convey the controversy that would’ve otherwise been completely lost due to the ill-designed way community notes work.
> Original video, which provides a better look at the plate (鄂F1573警), indicates that the video was shot in Hubei, China
This means there was yet another update since I looked. The previous note at least made the user suspect that there was an attempt to point fingers at some other country. Now it merely corrects the location in China. It is good to link to a higher quality video, but I don’t rule out that the note will with time drift to suit the agenda of the government.
> Many here are biased against Elon and Twitter for political reasons so they are too quick to pass judgement
He killed the feature that labeled accounts associated with governments though.
I think community notes are not his invention so I don’t blame him for them, but they are very poorly implemented and are strictly worse than tweets themselves.
If they applied the same algo to weighing tweets and replies, they could’ve gotten the same results but without making people trust blindly. But of course this defeats the point of paying for Elon’s blue checkmarks.
> This means there was yet another update since I looked. The previous note at least made the user suspect that there was an attempt to point fingers at some other country. Now it merely corrects the location in China. It is good to link to a higher quality video, but I don’t rule out that the note will with time drift to suit the agenda of the government.
I didn't calculate any statistics, but exploring the data I saw way more "anti-China" notes than "pro-China" notes.
> He killed the feature that labeled accounts associated with governments though.
Yeah, because Western government-funded media cried rivers when they were correctly labeled as such.
> If they applied the same algo to weighing tweets and replies, they could’ve gotten the same results but without making people trust blindly. But of course this defeats the point of paying for Elon’s blue checkmarks.
Doing so wouldn't make sense as the algorithm needs prior data from tweets to calculate ratings. What are your expectations? That Twitter hide (soft ban) tweets/accounts that an algorithm labels as misinformation because it was massively flagged? That happened before, it was easily abused, it was censorship.
> I think community notes are not his invention so I don’t blame him for them, but they are very poorly implemented and are strictly worse than tweets themselves.
You could give only one example where community notes were abused to spread misinformation and with time the correct note prevailed.
> exploring the data I saw way more "anti-China" notes than "pro-China" notes.
Maybe that is the problem, it is seen as pro/anti X instead of facts/lies.
> You could give only one example where community notes were abused to spread misinformation and
If you ask this, you miss the point. How exactly do you expect me to tell a true note from a false one? The medium is the problem here.
> with time the correct note prevailed.
As long as it prevails before the heat death of the universe that’s OK, right?
> Doing so wouldn't make sense as the algorithm needs prior data from tweets to calculate ratings.
Twitter has prior data from tweets. I don’t get it.
The algorithm they use to present one community note can be used to capture feedback and sort tweets instead. Problem solved. People have better access to balanced views but are not being nannied by the platform or elonsplained what is truth.
> Maybe that is the problem, it is seen as pro/anti X instead of facts/lies.
What I meant is that there are more notes correcting pro-China lies than anti-China lies. So the removal of community notes would benefit pro-China propagandists.
> If you ask this, you miss the point. How exactly do you expect me to tell a true note from a false one? The medium is the problem here.
Use your head. Do your research. Trust your gut. I think you're expecting impossible things from technology.
> As long as it prevails before the heat death of the universe that’s OK, right?
No, it's not right. It should be quick but I have no way to tell how long it took for the better note to prevail. AFAIK it is quick enough.
> The algorithm they use to present one community note can be used to capture feedback and sort tweets instead. Problem solved. People have better access to balanced views but are not being nannied by the platform or elonsplained what is truth.
Nobody wants that. With community notes you can choose to ignore the context but you still see the tweet. With your proposal people would just not see some tweets; they would lose agency.
Exactly. Just show me tweets well-sorted and let me decide. Don’t tell me “here is truth” (which is what “here is context” is intended to look like).
> right. It should be quick but I have no way to tell how long it took for the better note to prevail
You can’t. Because the implementation is botched.
> Nobody wants that.
Nobody wants community notes as is. Anyone who wants them only wants them because it’s a great way to disseminate disinformation.
> With community notes you can choose to ignore the context
Just like you could have chosen to ignore lies in the actual tweet without any community notes adding yet another layer of lies, only harder to ignore because it is now called “context” but really is just some guy’s opinion that won a popularity contest.
The context should just be a top reply, but then who would pay Elon 8 bucks to show up first?
> That Twitter hide (soft ban) tweets/accounts that an algorithm labels as misinformation because it was massively flagged? That happened before, it was easily abused, it was censorship.
>I've never heard anyone claim that misinformation went down with the acquisition
Twitter Japan's recommendations feed was inundated with political bullshit nobody cared about.
Musk came in, fired everyone at Twitter Japan, and the recommendations feed changed to anime, manga, visual novels, games, music, and other such pop cultural subjects that everyone cares about. Japanese Xers love Musk for cleaning house.
The change was obvious to all, and personally I finally could justify making an account to better manage the handful of accounts I always kept tabs on (news postings from games I play, new art from illustrators I like, etc.).
Speaking for the fediverse only, the experience is what you make of it. Stay on the largest, averagest instance and your experience will be bland; get on an instance with people who share your values, one that federates with other instances you're OK with, follow accounts posting content you want to see, and the experience will be much better.