I used to agree with you until I created a new Twitter account from scratch.
I'm not American and I specifically put non-political things in my interests. Yet, the second I signed up I got the following:
1. A notification about a smug reply a rando made to a Republican Congressman.
2. Posts from a meme page with a Pepe the frog avatar showing homeless people fighting in San Francisco.
3. Somebody I don't follow accusing another person I don't follow of being a nazi.
The problem with Twitter is that it needs high engagement, so it strongly recommends posts that are low on quality but high on emotion. This pushes people to post the most smug and controversial takes they can muster.
I recommend everyone create a new social media account every once in a while to see what the rest of the world sees. It's as enlightening as browsing the internet with Adblock disabled.
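The engagement dynamic described above can be illustrated with a toy ranking function (a hypothetical model for illustration, not Twitter's actual algorithm): if predicted replies and quote-tweets are weighted at least as heavily as likes, posts that provoke arguments float to the top regardless of quality.

```python
# Toy engagement-based ranker (illustrative sketch only, not Twitter's
# real scoring). Replies and quotes often signal argument rather than
# approval, but a pure engagement objective counts them all the same.

def engagement_score(post):
    return (post["likes"]
            + 2 * post["replies"]
            + 3 * post["quotes"])

def rank_feed(posts):
    # Highest engagement first.
    return sorted(posts, key=engagement_score, reverse=True)

posts = [
    {"id": "thoughtful-thread", "likes": 120, "replies": 4,  "quotes": 1},
    {"id": "rage-bait",         "likes": 40,  "replies": 80, "quotes": 30},
]

feed = rank_feed(posts)
# The rage-bait post ranks first despite having a third of the likes.
```

Under this (assumed) objective, a low-quality but argument-provoking post beats a well-liked one, which is the incentive the comment describes.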
Everyone takes shortcuts these days, so by Godwin's law, directly calling the person on the internet you disagree with a nazi is peak optimization /s
On a serious note I think the problem is actually the heavy moderation and algorithmic bubble creation.
Then you had certain famous and controversial people who operated above all bubbles, and the different bubbles started clashing.
The only way I see to fix this is if the "middle" is one bubble. And no, that's not a middle-ground fallacy. I don't think the middle position is the correct one, but rather that a strong middle bubble is required to separate the majority from extremist bubbles.
Unfortunately a middle bubble requires people to be tolerant. But many people have lost the concept of disagreeing with someone while supporting that they should be able to voice their opinion anyway. I'm not sure Musk actually understands this, so I'm not overly optimistic, but I'm curious to see what he will do.
> On a serious note I think the problem is actually the heavy moderation
I think the problem is crude filtering falsely labeled as moderation. What you'd want is a moderator who demands decent-quality arguments on the topic at hand.
What you have is just the removal of everyone who doesn't agree with a given viewpoint, while intemperate advocates of that viewpoint are allowed to turn the position into garbage.
> disagreeing but supporting that someone should be able to voice their opinion anyway
It's not just "disagreement" but the death threats, slurs, allegations of "grooming", racism, libel, and simple fraud (especially involving cryptocurrency).
Heck, Musk himself posted (and deleted shortly afterwards) libel about Paul Pelosi.
If it isn't a crime or spam then it is in fact disagreement.
There is no inherent difference between calling someone a "racist" or "nazi" and calling someone a "groomer". On Twitter these "slurs" are used by different sides (bubbles), but other than that there is no difference.
It might be a baseless allegation or there could be more to it, but either way it is clearly an opinion, and disagreeing is fine; there is no need to prevent anyone from voicing it.
>Musk himself posted (and deleted shortly afterwards) libel about Paul Pelosi.
I would assume he was being sarcastic by posting an obviously "whack" source with a silly, rather obviously made-up story. But it went over people's heads, so he removed it. (But that is just my interpretation.)
But whatever it was, why does it bother you? What exactly is so horrible about posting something "wrong" and deleting it?
What is so hard about the concept of allowing people to be wrong about things? Where does the "entitlement" to demand that people get everything right come from? You don't apply this standard to yourself, do you? And if you are wrong about something, would you rather have people tell you/debunk your claim, or would you want your "wrongthink" to just be removed?
I've yet to see that horrible combination of letters that is too dangerous to face the scrutiny of people calling it out. Lame slurs certainly don't pose any threat to a stable individual, and I don't think we should adapt to the unstable; that just makes everyone more and more intolerant.
> But whatever it was, why does it bother you? What exactly is so horrible about posting something "wrong" and deleting it?
It wasn't just "wrong", it was a propaganda article designed to dismiss a right-wing terrorist attack that almost killed the husband of the third highest ranking official in the country by painting the attack as a "false flag" designed to cover up homosexual infidelity. And Musk is the richest man in the world with enormous, unimaginable reach and influence. For better or worse, the more power you have, the more social obligation you have to use that power responsibly, and that includes understanding how your words will affect the thoughts and actions of others. Musk is under fire because he refuses to use his power responsibly.
As I said, I think Musk posted it as satire. The article read as if it were from the Babylon Bee. At no point did I even question whether he was serious about this. He trolls like that all the time; it's really silly to blame him when, after all this time, people still take everything he says at face value.
If you actually read that article (and not just the media complaining that he linked it) and came to the conclusion that Musk thought the story was legit, then maybe reevaluate the intelligence you attribute to him. You may not like his opinions on things, but he cannot be that stupid.
Also the "right-wing terrorist attack" is you spreading propaganda.
People are jumping to "false flag" stories because the story as presented is simply unbelievable (the third highest ranking official in the country has zero security?), and the media is being evasive (Politico reporting a third man opened the door, then reported that it was a baseless claim!), and the media is also aggressively eager to portray a nudist hemp jewelry maker illegal alien as a MAGA loyalist.
If the story were straight, and reported honestly, very few would remain suspicious.
It really isn't unbelievable. A mentally ill man was radicalized by right wing propaganda and went to a lawmaker's home to harm her. Her security wasn't there because she wasn't there. End of story.
You're looking for conspiracy where there is none. There is nothing "suspicious" unless you're trying to deflect blame.
> Her security wasn't there because she wasn't there. End of story.
Instead of waking Paul up and asking for Nancy so he could attack her, suppose he had planted listening devices, or explosive devices, and crept away unnoticed? But you're telling me it's perfectly reasonable to leave the home unguarded when the third in line for the presidency is away. Doesn't that seem like a horrible national security vulnerability?
It could possibly be true, but please, be honest enough to agree that it sounds unlikely.
Yes, I agree. Those MAGA republicans who are Bay Area nudity activists, living in homes with BLM stickers, LGBT Pride flags, anti-capitalism bumper stickers, and “Berkeley Stands United Against Hate" signs in the windows are very dangerous, and there's no telling who they'll attack next.
> What is so hard about the concept of allowing people to be wrong about things?
People act on that basis? If all this stuff were totally disconnected from the real world, then sure, yeah, whatever, but it isn't. The point is that Paul Pelosi was attacked by a terrorist who'd been radicalized by wrong information.
The "groomer" stuff worries me because I know enough queer people who are now more likely to face real, physical violence as a result.
But it's not just twitter. Some of this stuff is in "mainstream" media and politics now.
I'm only OK with you being wrong about things if you're willing to sit in an inaction bubble and do nothing which can affect me, including radicalizing others to your cause who will do things.
I'm not responsible for people acting on whatever basis they do unless I actively blackmail or incite them to do something.
>The point is that Paul Pelosi was attacked by a terrorist who'd been radicalized by wrong information.
Where exactly is the point in this? Clearly this happened before Musk's post about it, so where exactly is the connection?
>The "groomer" stuff worries me...
I could tell by reading your post, but I don't see any argument contrary to what I said. It's just another insult/not-nice thing to say/a lie/etc., but there is nothing special about it.
It might "worry" or rather "bother" you more than other words, but that is purely subjective; we all have some kind of personal ranking of "slurs". What matters is that I would not want the words at the top of my list banned. Not even if used against me.
>... and do nothing which can affect me...
Then we could not have this discussion, because clearly it will affect you. I can try not to affect you negatively, but it might not work, and trying is completely voluntary; it's up to you not to participate in this discussion if you think I'm not interested in a civil, rational exchange of thoughts. You cannot demand of me, or of anyone, that they limit their speech to what does not affect you negatively. That is impossible to practice.
What others do is even more outside my control. I am not responsible for other people's actions (with certain limitations, see the first sentence). And I'm not going to limit my speech, or ask anyone else to limit their speech, as a "preventive action" to stop a hypothetical person from doing something wrong. That is entirely backwards; open dialog, scrutiny, and tolerance are what should keep people from becoming extremists. Preventing people from having their views challenged isn't going to help anyone in the long run.
So you would have no issue with a concerted, national effort backed by politicians and the owner of the largest public square on the Internet to label you a pedophile and an enemy of the people? And you would react kindly to the wave of death threats and potential violence towards you and your loved ones that would come as a result?
This is totally silly; these are slurs/insults, not acts of violence or incitement.
Also why is "groomer" apparently "incitement of violence" but directly saying "pedophile" isn't?
How can one word put people in imminent danger but the other not if they are almost interchangeably used?
Needless to say, if a word is banned, people just use a different word, or they use codes.
So if you are actually worried about people getting hurt, you know that banning words and the related account suspensions aren't doing anything against actual violence.
> Also why is "groomer" apparently "incitement of violence" but directly saying "pedophile" isn't? How can one word put people in imminent danger but the other not if they are almost interchangeably used?
That's the point. Both put people in danger when used maliciously. If I told all your neighbors that you're a pedophile and a danger to children, you would feel uncomfortable and probably unsafe. Now imagine that I told all your neighbors, and then started putting up flyers all around town. Would you feel safe? Do you think you'd be able to convince your town that you're not a pedophile? Just saying "more free speech defeats bad speech" doesn't really work.
I'm not talking about banning words. I'm talking about moderation to deal with targeted campaigns of misinformation designed to put people in danger by powerful people with huge platforms.
> these are slurs/insults not acts of violence or incitements
When your rhetoric also often includes violence, they are incitements. "There's a pedophile living in your neighborhood, snatching up children left and right, right under your nose! We must act as We The People to defend our way of life against these groomers!"
You are confusing speech on twitter with real life actions that include speech.
Going to my neighbors and lying about me is a real-life action involving speech. On top of that, you force them to listen to you (at least temporarily) AND it is targeted specifically at my neighbors, which means you have an intent beyond informing members of the general public who happen to voluntarily listen to you.
If it were actually true, then you could make the claim that your intent is to protect my neighbors' kids (though you should of course report it to the police instead), but if it is a baseless false accusation or lie, then your INTENT was to harm me, and that is not comparable in any way with a "slur", "insult" or "mean speech"; it's an actual criminal act.
>When your rhetoric also often includes violence, they are incitements. "There's a pedophile living in your neighborhood, snatching up children left and right, right under your nose! We must act as We The People to defend our way of life against these groomers!"
Nothing in this example is even remotely criminal; there is no way to find intent to harm someone in that. It could be boiled down to "let's protect our kids from criminals". A (generic) call to action, but not to violence.
It's of course silly to post something like that on Twitter, but it wouldn't even violate Twitter's rules. The word "groomer" on Twitter is only "banned" if directed against a Twitter user or another specific, non-anonymous real person.
What actually happens on Twitter is that people are filmed acting and saying weird, creepy, or sexual things around/to kids (even the POTUS did that), and then people call them "groomers", "pedos", and other insults. I don't see the point of doing that, but it's definitely not criminal and not an incitement to violence; it's an insult and an expression of disgust.
> I'm not responsible for people acting on whatever basis they do unless I actively blackmail or incite them to do something.
If you make false accusations about people, and that results in them losing business, or being harassed or physically attacked, then you are responsible.
That is wrong. It's not the result that matters but the intention.
I can say whatever I want and you can claim it direly affected you or your business but you need to prove malice/intent beyond reasonable doubt.
The mere fact that an accusation was false is not enough.
Besides that, no statement (false or not) justifies a physical attack, so there is no legal responsibility for such actions. Even if the attacker says he did it because of the person who made a statement.
Exceptions are of course incitement, blackmailing someone into doing it, offering a bounty for the violent action, hiring a hitman, etc. All of these obviously involve malice/intent, which is what matters.
Consequently, hiring a hitman is a crime even if the hitman fails to do any damage, or never had the intention to do anything and just steals the money. Because again, the result doesn't make the crime; the intention does.
Obviously all of the above is intentionally simplified and not legal advice.
> What is so hard about the concept of allowing people to be wrong about things?
Because democratic society requires us to agree on basic facts in order to function.
"People are wrong about" things like:
- who got more votes in the last US presidential election
- whether future elections in the US will be fair
- whether global warming is real
- whether Democratic politicians are literal lizard people who molest children
We have seen that when we can't agree on these basic facts, people take actions like
- attempting to overthrow the seat of government
- attempted assassination of political figures (or their spouses)
- election of politicians who say that they will overthrow elections that don't go in favor of their party
- enacting policies that threaten to create a climate catastrophe.
The reason Musk's tweet at Hillary repeating the conspiracy theory that Paul Pelosi's assailant was a male prostitute is a problem is that, in providing an alternative (false) explanation for the attack, it contributes to hiding the truth on this issue. The truth being that we can draw a straight line from politicians' words to this attack.
And since we vote for those politicians, if people are wrong about this, then yeah, it does matter.
Let's assume hypothetically that someone disagrees with your comment. That someone is an admin on HN. Do you want that someone to challenge your comment with arguments/reasoning (with the possibility that the reasoning is very bad and might even contain factually wrong "facts"), or do you want that someone to just remove your comment?
And which of these two options is more likely to bring your opinion and that hypothetical admin's opinion closer together?
What about two other people reading your post? One agrees 100%, the other partially disagrees. Which option from above would be best for them? Which option is more likely to move their opinions towards something all agree on?
If for any of the above situations you think deleting would be the best then ignore this reply and pretend the whole thread was deleted ;)
> Because democratic society requires us to agree on basic facts in order to function.
> - who got more votes in the last US presidential election
The Democrats delegitimized the outcome of the 2016 election, the Republicans the 2020, and yet we still have a functioning democracy. We're even about to have a round of elections in a few days!
> Because democratic society requires us to agree on basic facts in order to function.
“Agreement” is a voluntary act. If people don’t agree on fundamental facts, we need more dialog to reach agreement if possible, and mutual tolerance if not.
“Agreement” cannot be an involuntarily enforced mandate.
If people don’t agree, we cannot force them to agree by silencing dissenting voices. Even if such silencing worked in the short-term, it’s corrosive to society’s ability to discern truth.
This isn't just Twitter now. This is mainstream politics, it's all name calling. The UK Home Secretary blaming the "tofu-eating wokerati" from the dispatch box a couple of weeks ago.
Ehhh, correlation is not causation. Flamebait rhetoric has seen an increase in all public media, IMHO. To offhandedly attribute it to Twitter is highly [citation needed].
Yep, I discovered recently that it would constantly fill my notifications queue with "someone whose posts you looked at ages ago retweeted something you don't care about" ... and would also neglect to notify me about the direct replies I was getting to my tweets. WTF?
Are you really expecting an algorithm to make a judgement call on that? Maybe that IS how Twitter is 'supposed' to work - if it is, then I'm glad that's not how it seems to be working for me!
I'm encouraged by recent talk from Musk about 'opening up the algorithm' and even the potential of configuring it. Heck, just give me a decent front-end query language and let me build my own feed - I'd probably pay for that.
>Are you really expecting an algorithm to make a judgement call on that? Maybe that IS how Twitter is 'supposed' to work - if it is, then I'm glad that's not how it seems to be working for me!
A direct reply has higher notification worthiness than something it thinks I might be interested in. This shouldn't be controversial.
> a significant proportion of the American right wing is, by any meaningful, objective definition, substantially aligned with the Nazi party, fully on board with authoritarian fascism, and ready and willing to murder people for opposing them.
One crazy guy breaking into the house of a politician will obviously get covered by the news, versus all the antifa mobs that have participated in looting, riots, and assaults on people for having "wrong" opinions, which get called "peaceful" by news outlets because both fight for the bottom line of the establishment.
The AT protocol I was looking at the other day has a concept of choosing your own algorithm to aggregate content. I think that’s an important option in order for a dominant platform to be useful to society.
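A minimal sketch of what "choosing your own algorithm" could look like (a hypothetical interface for illustration, not the actual AT protocol API): the platform exposes the raw stream of posts, and the client applies whichever ranking function the user selects.

```python
# Hypothetical pluggable-feed interface, loosely inspired by the idea of
# client-selected algorithms. This is NOT the real AT protocol API; the
# field names and algorithm registry are invented for illustration.
from typing import Callable

Post = dict
Algorithm = Callable[[list[Post]], list[Post]]

ALGORITHMS: dict[str, Algorithm] = {
    # Plain reverse-chronological feed.
    "chronological": lambda posts: sorted(
        posts, key=lambda p: p["ts"], reverse=True),
    # Hide anything the client's own classifier flagged as political.
    "no-politics": lambda posts: [p for p in posts if not p["political"]],
}

def build_feed(posts: list[Post], choice: str) -> list[Post]:
    # The user's chosen algorithm, not the platform, decides the ordering.
    return ALGORITHMS[choice](posts)

firehose = [
    {"id": 1, "ts": 100, "political": False},
    {"id": 2, "ts": 200, "political": True},
]
```

The design point is that the registry lives client-side, so new algorithms can be added without the platform's cooperation.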
I think the submission is exactly wrong, the culture of an online space is heavily linked to the technology of the platform, interlinked with its business model.
I'd like to think making the platform a more positive, constructive place would improve engagement and therefore be compatible with their business model.
However, I confess I'm leaning towards the more cynical outcome.
That's definitely the problem, but the answer isn't to change Twitter's algorithms. We must ask why Twitter's algorithms ended up where they did, and the answer is that it's more profitable to do what Twitter does than the alternative.
It's not like Twitter's creators maliciously set out to become the flaming hellscape it is now. It responded to market forces and user behavior. Neither of those factors have changed, so why would we expect any for-profit public square to do it any differently?
Back in 2007 Hotmail was so big that it was profitable for them to have accounts that deleted themselves if you didn't log in in a couple of months and not pay too much attention to spam filtering. Then Gmail came along and ate their lunch.
TikTok is winning the race for zoomers' attention by doing better content filtering. I'm sure that a well-run Twitter competitor could take their entire market.
This is a great observation and link to Hotmail. I suspect there are some seriously valuable lessons to be learned by brainstorming on the similarities/differences there.
In direct conflict with your assertion is that the other _more successful_ social networks don't do this; they put people in their bubbles. TikTok is fun to use because it shows me content I want, not the flamewar topics. Jack Dorsey specifically had this vision for Twitter, and it has served to make it less monetizable and only mediocrely successful (compared to the other SNs).
My biggest issue with Twitter is not even really that the content is inflammatory or controversial, it's that seeing "a person I don't know calling another person I don't know a Nazi" is irrelevant to my life.
My interface to Twitter is through Tweetbot. So I’m not subjected to the algorithmic nonsense that Twitter thinks I should see. However I am obviously influenced by the echo chamber of the accounts I’ve chosen to follow. I think the experience of users who view Twitter’s view of what to see is vastly different from users that use 3rd party clients. If Twitter wants to increase engagement they’d probably be best to kill off the 3rd party clients.
Not to mention the people who have built their careers, personality, and livelihoods on that engagement. "Thought leaders" and "engagement grifters" are a very new (in the digital sense) and very real thing. And we can't forget the professional advocates whose job consists only of shaming, berating, and victimhood.
It's a tough pill to reconcile; make money at the expense of toxicity or find a revenue model that's benevolent. It'll be interesting to see if Twitter changes.
> so it strongly recommends posts that are low on quality but high on emotion.

Yup.

In the 1930s movie "Meet John Doe", see how a newspaper can try to increase circulation by (a) publishing something, a continuing narrative, emotional (in the movie, a poor person frustrated with the economy and politics writing a FAKE letter to the newspaper on their decision to commit suicide), (b) creating controversy, and (c) encouraging arguments to start, should he or should he not, etc. The continuing-narrative idea was to encourage the controversy to run for some months. The idea of such a narrative was formulated by E. Bernays, as I recall, also in the 1930s. Later Bernays was an ad executive in the US.

Sooo, an old, 1930s-or-before idea to create strong emotions, headaches, stomach aches, upchucks, anger, frustration, stress, depression, various individual psychological problems, and various political and societal problems, was for the media to create long-running, continuing controversy.

Net, whenever you start to feel emotions from what you see in the media, guess that 90+% of the time you have just been fooled, manipulated, exploited, jerked around by the media looking for eyeballs for ad revenue.

Recall the old characterization of media values: "If it bleeds, it leads."

My old description has been that the media just wants to grab you, by the heart, the gut, below the belt, always below the shoulders, never between the ears.

Sooo, I deeply, profoundly, bitterly hate and despise the MSM (mainstream media) -- they want to deceive me, steal my time and energy. So, for some years, I've flatly refused to pay any attention to the MSM.

Solution for the MSM and the old techniques of journalism? Easy. People just ignore the MSM. That will cut off their supply of eyeballs and ad revenue, and they will go out of business.
If using a new account is so atypical that people need to be prompted to try it, then is it really what the rest of the world sees? I'd assume the vast majority of the rest of the world highly curates their accounts by utilizing the Follow feature; the effect of moving beyond the randomness should be similar regardless of whether they are a relatively new user following only a few accounts or a seasoned user following thousands.
Here's the uncomfortable question, though: can a social media platform achieve sufficient scale, so it's more useful for discoverability than your group chats, without optimizing for engagement?
Discord is a happy medium (just, really large group chats).
Yep, I never understood having Twitter. I read linked posts, but the "community" people talk about is lost on me because I used to use old forums and now Discord.
TikTok's algorithm just sounds generally less transparent than Twitter's. I wonder if they've gone out of their way to reduce the reach of political content there in general. Or if Twitter just acts as the grease trap for this kind of content.
Discord is actually a good example: I think I've gotten 0 notifications about groups I don't follow, so there isn't as big an incentive to aggressively smugpost as on Twitter.
I don't join those servers, though, so I don't get notifications about their @channel messages. Twitter, however, is essentially doing exactly that to its users. I hate being subjected to shit I don't want to see, so I don't keep Twitter installed on my phone, despite the fact that I am active in a couple of small, niche communities that can only exist at scale thanks to social media (and specifically Twitter). It's frustrating. Discord is (currently) the worst-but-best solution to this problem.
Why would you join a Discord dedicated to either? That seems like an own goal to me. If you don't want to get sketchy messages, then don't go to sketchy places.
Agreed, everyone should do this. I created a burner Facebook account just so I could sell things on Marketplace, and the feed of clickbait I see is pretty awful. It's eye-opening to see what kind of shit the platform shoves at you when they have no data to go on. However, I bet even small bits of info would be enough to strongly change things early on. Haven't tried, though! Giving 'em nothing.
And why is it that whenever curiosity gets the better of me and I actually do click on one of the 'Trending now in the UK' 'hashtags' wondering what it's about... it's about absolutely nothing? Seemingly everyone's using it to talk about something completely different, none of which is 'trending' at all. Just a naive algorithm not delivering anything useful at all. Just random crap in various languages, nothing to keep me on the platform and 'interacting' at all.
(Of course I can't remember many examples, but earlier it was '#confirmed'.)
I tried this the other day, and besides an insanely difficult captcha that made me add up faces on dice, I had the same result as you. Overnight I got spammed with low-quality notifications from troll political accounts.
I wonder if there's a minimum number of accounts you need to follow to avoid that. I follow a reasonable number, but even straight after signing up, I don't think I've ever had an unsolicited notification.
If you haven't checked out Farcaster (farcaster.xyz) I highly recommend it. There is none of that nonsense, and it's a twitter-like community for builders that reminds me of Twitter ala 2010.
Conversely I signed up for Mastodon and saw a ton of awesome technical content, geeky chatter and exactly zero hostility or toxicity. Has been that way for the past few years. (Though, a smaller instance, not mastodon.social)
Mastodon really, really needs to work on its PR. Every time I see someone talking about it on twitter, they're trying to work out what the significance of choosing a particular server is. Even a message as simple as "Like email, but social" might help alleviate this.
Eh, I don't think it matters. It's not a business, and it doesn't specifically need more people to join. Over time it propagates anyways. I've personally brought tons of people to the fediverse, people who I care about and I think would benefit from it. So far, so good!
It is also related to the social engineering that has occurred over the last few decades. If the people running the show can get the plebes to focus on hating each other, they get to keep running the show.
Twitter gets bad content because Twitter promotes bad content, and that happens because Twitter's work culture prioritises engagement metrics over meaningful interactions.
The underlying problem is a problem of Twitter-the-company; the actual problem of Twitter-the-product is technical and can be solved by going easier on engagement-based recommendations.
When I first registered on Twitter I didn't understand how anyone finds it useful, and I left my account idle for about 3 years.
It only became useful when I copied a highly curated follower list from a coworker. Since then I have onboarded several people to Twitter this way. They start by copying my follows to get it to a useful state.
It's quite clear there is a lot of room for improvement here.
Oh, then yeah that's just annoying. I was annoyed off of it not too long ago as a long-time user, and it was for political stuff for me as well. We won't have a good picture of post-Elon Twitter until the 9th, but I'm cautiously optimistic.
The problem is that nobody needs this shit. And we need a social network that doesn't have incentives to promote and create an environment conducive to toxic content.
Compare to HN. (I think) We all enjoy HN, mostly because it's quite effectively moderated and the site admins don't have the same OKRs as twitter/FB etc. do.
I don't want the network deciding what speech I see or don't. Just provide me with easy-to-use moderation tools and I'll moderate my own feed. Better yet, give me access to the feed itself so third parties can build alternative moderation tools on top as well.
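User-controlled moderation of the kind proposed here could be as simple as user-defined filter rules applied to an open feed. A minimal sketch, assuming third-party tools can access the raw feed (the field names and rules are invented for illustration):

```python
# Sketch of client-side feed moderation: the user, not the platform,
# decides what gets hidden. Assumes the raw feed is accessible to
# third-party tools; "muted_words" and "muted_authors" are the user's
# own rules.

def make_filter(muted_words, muted_authors):
    def allowed(post):
        if post["author"] in muted_authors:
            return False
        text = post["text"].lower()
        # Drop posts containing any muted word.
        return not any(word in text for word in muted_words)
    return allowed

my_filter = make_filter(muted_words={"nazi", "groomer"},
                        muted_authors={"rage_account"})

feed = [
    {"author": "friend", "text": "New compiler release is out!"},
    {"author": "rando",  "text": "This politician is a nazi"},
]
visible = [p for p in feed if my_filter(p)]
# Only the non-flamewar post from "friend" remains visible.
```

Because the filter runs client-side, two users could apply completely different rule sets to the same underlying feed.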
I signed up out of necessity because I work for a media site that gets a lot of traffic from Twitter. Loads of media orgs (good and bad) spread news via Twitter. A lot of really good journalists have very positive presences. A lot of independent journalists are probably heavily reliant on it.
It's unfortunate that Twitter primarily only has 280 characters of text to infer intent and sentiment. AI has overfit engagement around hate and polarizing text.
TikTok, and really any video-content feed, now has deep AI signals to infer a person's true hobbies and interests.
When I first joined Instagram in ~2014, after a little bit I started getting tons of incel/RW suggestions in explorer and I couldn't for the life of me figure out why. I mostly followed my IRL friends, some environmental groups, and a couple of musicians.
One day I was messing around and learned you can apparently favorite/save posts (I can't remember anymore what they call it), and I had saved a post praising Vladimir Putin. I couldn't remember ever seeing it, much less favoriting it, but as soon as I got rid of it, the algorithm slowly stopped suggesting far-right content.
The actual technology plays a major role in creating polarization.