The problem is using "trending" as a content discovery algorithm in the first place.
"Trending" just means that the topic is being referenced by the largest volume of accounts. Even if you can say with certainty that every one of your accounts represents an actual, legitimate person, you still have the problem that all "trending" does is surface content that's preferred by whichever of your users can mobilize the biggest angry mob. And if you can't say that with certainty (which bot-flooded Twitter absolutely cannot), "trending" becomes a wide open battlefield, trivially subject to exploitation by whoever has the largest budget for sock puppets.
"Trending" isn't an editorial strategy. It is the abdication of editorial strategy. It's being asked by Twitter among others to do much more than it is capable of doing, and their users suffer as a result.
> "Trending" isn't an editorial strategy. It is the abdication of editorial strategy.
Not sure I understand your point. “Trending” is exactly what it says on the tin. On the other hand, calling topics picked by an editor “trending” would be a bit of a perversion of the meaning of the word, no?
Trending topic is a bit of a misnomer. Twitter isn't identifying "topics" here, per se; they're just extracting the most common substrings verbatim from recent tweets.
A human editor might see a topic like this becoming popular and summarize it as "Synagogue Vandalism". They wouldn't just present it as "Kill All Jews" with no other context.
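The substring-counting behavior described above can be illustrated with a toy sketch. This is purely hypothetical and not Twitter's actual pipeline; it just shows why raw frequency counting surfaces whatever exact phrase is repeated most, with no summarization and no context:

```python
from collections import Counter

def naive_trending(tweets, top_n=3):
    """Toy model of volume-based 'trending': count exact phrases
    (here, whole tweets/hashtags) and surface the most frequent.
    No editorial summarization, no context - just raw counts."""
    counts = Counter(t.strip().lower() for t in tweets)
    return [phrase for phrase, _ in counts.most_common(top_n)]

# A coordinated flood of identical posts dominates organic variety:
tweets = ["#topicA"] * 50 + ["a nuanced take on topic B"] * 3 + ["#topicC"] * 10
print(naive_trending(tweets))  # ['#topica', '#topicc', 'a nuanced take on topic b']
```

Note that the verbatim string itself is what gets promoted: there is no step where a human (or model) rewrites "#topicA" into a contextualized headline, which is exactly the gap between this and an editor's "Synagogue Vandalism" summary.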
Trending has connotations of acceptance. When wearing jeans is trending, for example, it might mean that jeans are “good” and “acceptable” to wear.
The “trending” you’re referring to would be better described as “high traffic”. There’s a hundred thousand people on the roads rushing to get home right now, that doesn’t mean that traveling by car is “trending”.
Well, if that's not what they're doing already, they really ought to think about it. Twitter is supposed to be the "safe" micro-blog social network, where one gives up complete free speech in exchange for being shielded from extremist hate speech. That shield is the supposed benefit of using Twitter over complete-free-speech alternatives like Gab or certain Mastodon instances. If they can't deliver on it to the point where users are seeing "Kill all Jews" trending, then the only edge Twitter has over its competitors is user count, and that is subject to change if they fail to deliver on their other supposed advantage.
Basically if you tell me the reason Twitter is better than Gab et al. is that you don't see hate speech on Twitter because that's their policy, yet a user logs in and sees the most blatant stereotypical hate speech possible trending right there at the top of the page, you're not really selling me on what you tell me is supposedly better about your service.
> Twitter is supposed to be the "safe" micro-blog social network
Strange, I have the opposite perception. On Twitter, I am constantly confronted with "kill the jews" kind of hate speech. If you post something with a remotely controversial hashtag, you'll get frothing responses. Oh, and on some days searching for innocent terms gets you tons of NSFW nudes - if you are outraged by that kind of thing.
OTOH, Mastodon is advertised as the "safe" alternative - or let's rather say "comfy". Whether you feel comfy being able to express radical free speech without having to deal with identity politics, political correctness, "SJWs" and so on, or whether you are on the other side of that debate and don't want to be bothered by hate speech, misogyny, and other abhorrent things - a loosely federated network like Mastodon offers places for both groups.
Who said Twitter was "safe"? Twitter blocks some of the extreme end, but not nearly enough to call it "safe". Twitter hosts credible threats by people who have attempted murder, like the recent pipe-bomb mailer.
They've been trying to position themselves as "safe" for years now with their "Trust & Safety Council" and other such nonsense. They repeatedly use "safe" and "safety" in their press releases about their new policies.
The replies to my previous comment including yours seem to be missing the point: of course Twitter doesn't remotely begin to deliver on any of these promises of "safety," yet Twitter is still presented as the "safe" alternative to Gab et al. Personally I don't mind seeing "hate speech" because a.) I see hate speech about my own race, sex, and religion on a nearly daily basis at this point and b.) I realize that "deplatforming hate" doesn't do anything but cause it to coalesce elsewhere.
By showing that Twitter isn't as "safe" as they claim to be, I'm questioning why we're continuing to use it instead of either a completely-free-speech or a better-curated actually-more-"safe" alternative.
Twitter needs some kind of negative feedback mechanism to allow for the discovery algorithm to correct for the sentiment of users. A downvote, dislike, un-heart, something along those lines, would be a good signal to allow users to signify what kind of content they deem objectionable.
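As a rough sketch of what such a signal could look like (purely hypothetical, not any real Twitter mechanism), a ranking score might discount raw positive engagement by explicit negative feedback:

```python
def rank_score(likes, downvotes, downweight=2.0):
    """Hypothetical ranking signal: positive engagement discounted by
    explicit negative feedback. With downweight > 1, a downvote costs
    more than a like earns, so widely-objected content sinks even
    when its raw volume is high."""
    return likes - downweight * downvotes

# A high-volume but widely objected post vs. a smaller, well-received one:
posts = {
    "widely objected": rank_score(likes=400, downvotes=900),  # -1400.0
    "well received": rank_score(likes=300, downvotes=10),     # 280.0
}
top = max(posts, key=posts.get)  # "well received"
```

The `downweight` parameter is an assumption for illustration; tuning it is exactly where the "disagree button" failure mode discussed below comes in, since a mob of downvoters moves the score just as effectively as a mob of likers.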
Downvote-type systems aren't that effective since they usually end up becoming a "I disagree" button and multiply the effect of whatever mob is most active at the moment. It's one of the reasons HN locks them behind a karma wall.
Even that is not an effective way to avoid the negative effects of the downvote button, especially not here on HN where it actually is meant to be used as a "disagree" button:
The "karma wall" only keeps the downvote option out of the hands of those who follow short-lived "mobs", it does nothing to stave off the balkanisation of the platform.
Another way to deal with this would be not to apologize, but to insist the algorithms are impartial, and to condemn their users. Call a big press conference and say: shit, we found that a big portion of our users are antisemites. Put that trending topic behind a trigger warning and maybe add content disclaimers to antisemites' pages. They could say they are protecting freedom of speech, but also the other side of the coin - the freedom to call out positions you find wrong.
I understand of course that that ship has sailed.
However, I also believe that the strategy of "keeping the lid on" positions that are hateful or morally undesirable does not work long term. It was completely unthinkable and taboo to voice Nazi positions in public in post-WW2 Germany; it is even forbidden by law. But this did not eradicate Nazi thought, it just pushed it underground. And look, suddenly with some historical distance it is coming out again. Same in the US, with people running around in KKK hoods and waving the Confederate flag. Of course, racism was never gone or solved (no matter what we thought as white kids growing up in the 90s), but now it seems to be OK again to be openly racist in certain circles.
To tie it back to the topic, I believe we should not sweep positions we find horrible under the carpet, by saying they are taboo, and pretending we (e.g. Twitter) are impartial and unpolitical. Rather, we have to endure that people hold these positions, and speak out against them. Even if that means that we no longer are "politically neutral".
You contradict yourself. Anti-hate-speech laws and culture suppressed the spread of these ideas. That's the concept of "chilling effects" (from the other side). If suppressing speech didn't matter, no one would be worried about it being suppressed.
With the rise of non-suppressed hate speech on Reddit and Twitter and Fox News and the like, we saw a resurgence of terrorism in the US, terrorism which was very clearly inspired by the hate speech (specifically, its conspiracy theories) and spread ideas that people wouldn't have thought up on their own.
I'd say it suppressed the expression of ideas, but it was not successful enough to suppress the spread. It certainly did not extinguish the ideas.
I would not say right-wing terrorism was only inspired by recent online hate speech. The US always had militias, people like Timothy McVeigh, white supremacists, and so on. However, I think that social media made it impossible to "keep the lid on" these ideas. Before, the mass media could censor this pretty well, but then mass media and mainstream society lost their monopoly of interpretation. Now people with nasty ideas realize they are not alone, and they come out and organize.
>If suppressing speech didn't matter, no one would be worried about it being suppressed.
We could worry about it because the principle might one day be used against us. It's one thing to ban swastikas, but who gets to define what counts as "terrorist"? We already see that being abused to suppress dissidents abroad.
The chilling effect is entirely about the speech suppression. One of the side effects of speech suppression is that one no longer knows what people really think, and there isn't good evidence that silencing someone changes their mind, which is the really important thing. There were vigorous efforts to silence the Nazis for example, and none of it mattered, because all speech control efforts target symptoms and not causes.
About right-wing terrorism, pretty sure that's independent, as it has always been there.
This notion of free speech that would even allow calls for violence sounds immoral to me.
Calls for violence are not protected free speech in the Netherlands and, I'd guess, in most (Western) European countries.
You would get fined or even jailed if you said shit like this in our country. And I'm happy for it, because it serves no purpose to incite violence. There's no debate to be had; there is nothing that can be debated. What is there to debate with people who call to kill all Jews?
No, people can still have their free hate speech, but ideally not on any major social media outlet or news outlet if possible. And anyone calling for the murder on another person belongs in jail.
This exactly. Allowing violent speech to fall inside our definition of "free speech" has a net negative impact on speech.
Put another way: why would I paint a target on my back by trying to debate people who actively advocate for my murder? I'm much more likely to shut up and stay safe.
Yeah, I mostly agree, but I'm not suggesting we leave calls to violence in the open. Rather, replace the tweet in question with a notice saying basically:
"User XY called for violence against people here. We find this despicable and totally oppose these kinds of ideas. Signed, Twitter CEO. PS: To publicly document what kind of person XY is, you can click [here] to read their hateful statement in the original."
I'd much rather have Nazis behind a content warning, publicly pilloried and shamed on Twitter, than ban them and let them fester unbothered on Gab.
Algorithms are trained on real-world data. If Twitter claimed to be 'impartial', that would be fundamentally false (you cannot create a perfectly representative sample, and god knows how many propaganda bots are pushing up their DAUs).
Worse still, it would communicate to advertisers that if Twitter's algos can be gamed so easily, its user base is 'low quality' and not worth serving ads to.
> Another way to deal with this would be to not apologize, but to insist the algorithms are impartial,
But since, as they admit, they have historically intervened in the trending rankings, insisting that they are impartial would be lying.
They could decide not to intervene this time, but that would raise the question of why this is less deserving of intervention than past issues, if they were being honest and open.
> It was completely unthinkable and taboo to voice nazi positions in public in post WW2-Germany, it is even forbidden by law. But this did not eradicate Nazi thought, it just pushed it underground.
That is precisely the point of these laws. Certain hate speech and anti-constitutional propaganda is marginalized in order to prevent further recruiting and to keep these stances from becoming part of mainstream debate again.
It's an institutionalized (and therefore controversial) form of social control. The social control of ostracizing certain type of speech is present in all societies, though. If you're among ardent Christians, cursing and blasphemy is going to be 'prohibited', namely sanctioned. If you're among communists, other types of speech are prohibited. In many societies, you are not allowed to talk about certain taboo topics like sex, death, badmouthing someone's parents, etc.
Certainly nobody ever thought that German anti-hate-speech laws would make Nazis go away. They were invented to prevent a new Nazi party from rising again (together with other mechanisms, like party prohibition by the Supreme Court) - and also, on behalf of the allied occupation forces, to prevent further embarrassment of West Germany abroad.
On a side note, it was never "completely unthinkable and taboo to voice nazi positions in public in post WW2-Germany." It wasn't done in the GDR because of the Gulags, whereas in West Germany it was always pretty common. There was even a successor party to the NSDAP that was promptly prohibited. There were right-wing terrorists ("Wehrsportgruppe Hoffmann") and plenty of old-school Nazis who talked and still talk about Nazi positions. We even have our own "Nazi grandma", the Holocaust denier Ursula Haverbeck, aged 89, who was sentenced to 10 months in prison but never served them.
> but this did not eradicate Nazi thought, it just pushed it underground.
I think it's reasonable to push Nazism underground rather than let it go mainstream and gain steam. There are enough idiots who would think it legitimate, and instead of being eradicated it would grow.
But I agree with not sweeping things under the carpet, ignoring them won't solve the problem but keep them somewhat contained for a while.
The Trending Topics feature is so bad sometimes. The other day I saw that #davies was trending, and the article presented by Twitter was marriage news about some unknown couple. It was really trending because of a soccer phenom playing his last game in Vancouver before heading to the big leagues.
Man, Twitter seems to be eating a lot of crow lately in this domain. They came out swinging against regulation/"censorship" of their users... but this is getting out of hand. Twitter, Facebook, etc. need to start taking some responsibility here or someone is going to do it for them (i.e. the government).
There was controversy (still is?) about the Labour party refusing to adopt the IHRA definition of antisemitism, because some interpretations hold that it makes criticism of the state of Israel always antisemitic. The same is true in America when people don't fall in line with 100% financial support by the US federal government for whatever Israel does.
Major left-wing activism leaders Linda Sarsour, Tamika Mallory, and Carmen Perez are proud, longstanding supporters of Louis Farrakhan and his hate-group "Nation Of Islam"
Not vouching for the linked article, but "fair and balanced" isn't a requirement to post here. Vox and Vice are both openly liberal publications that are regularly posted here. If you have specific disagreements, comment on them. Calling something a "nut-job shill publication" isn't productive.
I was making a joke about how it is a very conservative publication. "Fair and Balanced" was Fox News' motto for a long time.
Not that being conservative invalidates everything on the blog. But you can find blogs on the internet claiming our government is controlled by shape-shifting lizard people, so a single article from a biased source isn't a strong argument, and having an article titled "The Democratic Party Is Working To Destroy The American Way Of Life" is pretty telling.
How about this: the source is generally considered to warp the truth to suit its own narrative. Both Vox and Vice are widely considered fair, but tend to publish articles that would be of interest to their leftist audience. This source is not credible; it's more in line with Jacobin or Daily Kos.
The article itself mentions that the 2 examples it gives to state its premise were both "rightfully brushed off as raving lunatics."
Everyone agrees being an antisemite is bad. The fact that people like Farrakhan sit on the left doesn't mean all people on the left agree with those views. It doesn't mean they get a free pass. It's sort of how people on the right really hate it when you say "everyone who voted for Trump is a Nazi".
This seems to be very much mock outrage and false equivalencies.
Farrakhan hasn't been rejected. You can find photos of him with Obama, the Clintons, and various high-level DNC people. Farrakhan got to visit the White House during the Obama administration.
The antisemites on the right have been rejected by the rest of the right. Perhaps we can consider David Duke to be on the right, but you won't find him at the White House. He has been solidly rejected.
Here's the Wikipedia page of David Harsanyi, the author of the article:
Harsanyi is a nationally syndicated columnist and senior editor at The Federalist. He is a former editor of Human Events and opinion columnist at The Denver Post.[1] His writings on politics and culture have appeared in The Wall Street Journal, Weekly Standard, Washington Post, National Review, Reason, Christian Science Monitor, Jerusalem Post, The Globe and Mail, The Hill, Sports Illustrated Online, and other publications.[2]
A libertarian,[3] his column is nationally syndicated by Creators Syndicate. He is author of The People Have Spoken (and They Are Wrong): The Case Against Democracy, Obama's Four Horsemen: The Disasters Unleashed by Obama's Reelection and Nanny State: How Food Fascists, Teetotaling Do-Gooders, Priggish Moralists, and other Boneheaded Bureaucrats are Turning America into a Nation of Children.[2] He left his position writing op-eds for The Denver Post to work for TheBlaze.[4]
Any platform that facilitates people saying what they want to say is going to facilitate hate speech, because some people are racists.
The platforms can try to prevent that, and Twitter does try. But it is simply not possible to catch everything unless every single tweet is reviewed by a human censor. That won't scale, but even if it did, the censors would not always be trustworthy.
Some people seem to be convinced that we can have large social platforms free of hate speech and fake news. It's not possible, because of the users.
It’s pretty clear that some people on the left would complain about abuse on Twitter unless abuse tools were 100% effective, which is not achievable.
And simply implementing such tools would draw the ire of people on the right, including the party that currently controls the presidency and the congress.
"Trending" just means that the topic is being referenced by the largest volume of accounts. Even if you can say with certainty that every one of your accounts represents an actual, legitimate person, you still have the problem that all "trending" does is surface content that's preferred by whichever of your users can mobilize the biggest angry mob. And if you can't say that with certainty (which bot-flooded Twitter absolutely cannot), "trending" becomes a wide open battlefield, trivially subject to exploitation by whoever has the largest budget for sock puppets.
"Trending" isn't an editorial strategy. It is the abdication of editorial strategy. It's being asked by Twitter among others to do much more than it is capable of doing, and their users suffer as a result.