Let's see how the TikTok Section 230 ruling will affect Twitter and Meta properties.
It basically said that if the content is presented in any other way than a strict chronological timeline, it's considered "editorial content" and the platform is responsible for it.
Ragebait content brings in engagement, engagement brings views, views bring money. This might eventually kill that unless Meta lawyers can argue that it doesn't apply to them, but does to TikTok.
>It basically said that if the content is presented in any other way than a strict chronological timeline, it's considered "editorial content" and the platform is responsible for it.
As soon as the volume of stuff exceeds the attention window of the average person, some kind of system is needed to prioritize things. Many sites rely on user-driven metrics to promote known-good content. So the criterion you have suggested here is completely impractical.
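To make that concrete, here's a rough, hypothetical sketch (made-up Post fields and weights, not any platform's actual ranking code) of the difference between a strict chronological timeline and the kind of user-metric ranking most sites actually ship:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class Post:
        author: str
        text: str
        created_at: datetime
        upvotes: int = 0
        flags: int = 0  # user reports, a stand-in for community "known-bad" signals

    def chronological_feed(posts):
        # Strict reverse-chronological timeline: the platform makes no judgment.
        return sorted(posts, key=lambda p: p.created_at, reverse=True)

    def ranked_feed(posts, now=None):
        # User-metric ranking: promote what the community upvoted, demote what it
        # flagged, decay by age. Any such ordering is a choice the platform makes.
        now = now or datetime.now()
        def score(p):
            age_hours = (now - p.created_at).total_seconds() / 3600.0
            return (p.upvotes - 3 * p.flags) / (1.0 + age_hours)
        return sorted(posts, key=score, reverse=True)

The second function is where every real site ends up once volume outgrows attention, which is why a rule of "anything other than strict chronological = editorial" sweeps in basically everyone.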
People should stop trying to impose their views on "ragebait" and be very restrained when it comes to adding regulations on any kind of speech. Anything that gets added will be twisted into something that can oppress anyone.
It's an interesting and severely non-trivial debate, and there's no way to 'prioritize'/'curate'/'feature'/'select' content (which controls what passively-browsing users are likely to see, unless they search or click through to specific users/topics/groups) without in some way "imposing [your] views".
For example, all other things being equal, should known-false or misleading claims be treated equally to unknown or factually correct ones? Should posts from an account with a history of posting those be treated equally to accounts that don't? And of course who gets to determine what constitutes 'accurate'? How do we prevent community voting being misused?
With every election, we edge closer to legislating (in the US or EU) a definition of social networks as a 'public good', with the accompanying regulatory requirements. Equally, we've seen enough evidence of politically slanted moderation, or attempts to influence moderation, in many countries.
(This stuff is non-trivial. What about accounts that are accurate about one set of topics (e.g. the Chinese economy) but sloppy on others and post outright garbage on a third? What about accounts that 'merely' retweet or boost content, vs post their own 'original' content? Should online journalists be held to a higher standard than average users? What about 'news personalities' like the Alex Joneses of the world?)
I'm very sure this debate will be revisited after the impending US election/ TikTok divestment/ Telegram controversy.
A privately owned social media platform exists for no other reason than to promote the political agenda and economic interests of its owner(s). So where you see racism, misogyny, and lies I see a racist misogynist who lies.
Honest answer, not something we’ve achieved yet. Some hints of it in NNTP, HTTP, RSS/Atom, Usenet… but until we decide that Internet connectivity is a fundamental human right and actually back that up I think we’ll still be a long way off from a digital commons.
Sure you could argue that any individual server is privately owned, but the collective protocol is about as public as something like social media can be
Eh, I mean generally they exist to sell ads, typically in a fairly amoral way (like, within limits, nothing that will annoy shareholders too much). Twitter's the exception, here.
Yes, but... Not often have horrible people been paid an incentive to trigger responses and emotions in others, at world-scale, and with complete immunity from the consequences.
We'll see how the Brazilian experiment goes, but I suspect it will somehow show that the world's horrible people who find themselves in Brazil will have a bit of a harder time.
> Not often have horrible people been paid an incentive to trigger responses and emotions in others, at world-scale, and with complete immunity from the consequences.
It is worth pointing out that there has always been horrible and nasty behavior on Twitter, before any monetary incentives. To wit: they had a moderation team! Also, I don't believe there's rigorous research showing that monetary incentives drive this. My intuition, without data, is that the nasty and horrible behavior is driven more by things other than money.
Not often? The cable TV shows in the US and elsewhere are full of nonstop triggering content. The Brazilian court is being used as a tool to silence dissent there. When you make "hate speech" and "misinformation" crimes, everything that goes against the group in power will be shoehorned into one of those categories. The only speech that needs protection is by definition going to offend or inconvenience someone with the power to shut it down without those protections.
Exactly. It shouldn't even be a surprise: the internet already had examples of this long before Elon took over Twitter: just look at 4chan. If you don't have any moderation at all, pretty soon your forum turns into a cesspool as all the nastiest people flock to it (because they've been banned elsewhere), and then their nasty posts plus the lack of moderation drives out all the decent people.
You can even imagine this in real-life: imagine you had two nearby restaurants and you wanted to go out to eat. In your area, there's apparently a minority of very angry and ill-tempered people who like to argue with or scream at strangers and threaten them. At restaurant A, there's bouncers at the door who kick out anyone who causes trouble, but restaurant B believes in "absolute free speech" and won't do anything about these people when they disturb the other customers. Very soon, restaurant B is going to have a reputation as a place you don't want to go if you just want a nice meal without any drama.
Social media is more like a library or a movie theater. You can choose what you want to see for the most part. Maybe some people feel the need for more filters for themselves personally, to be excluded from conversations they don't like. This is how the early web worked... You had a choice of where to go and had to make up your own mind, and any encounter with unwanted sites would be easy to back out of. I think there is a place for a public square, and it's good to be aware of different opinions. But if you want what is essentially a parental filter for yourself, then go ahead.
It's not as black and white as that. The previous owners gave into pressure from the government to ban info about Hunter Biden's laptop. The previous owners had their own political bias which influenced what they censored.
It's private, but they can't claim it's just "dumb pipes" given all the curation they do through algorithms. So they are open to facing consequences for whatever they choose to do. They shouldn't be able to push all the liability onto users as long as they actively boost content.
The Guardian exclusively publishes negative stories about X.
When there is a problem, it’s rarely exclusive to X.
When there isn’t, the journalists go trawling for random low engagement tweets and pretend those anecdotes are evidence of a systemic problem.
It’s hard to see their actions as anything but a campaign of hit pieces.
I’m not sure if they are doing this to generate outrage bait, if the journalists have personal vendettas against Musk, or if it’s simply a matter of traditional media feeling threatened by social media.
It’s a good reminder that in the real world things are rarely exclusively good or exclusively bad, and if a media outlet portrays things that way, then that media outlet is trying to mislead you.
> or if it’s simply a matter of traditional media feeling threatened by social media.
The problem with that theory is that pre-Musk Twitter was loved by journo's across the globe .. it was the place to break news and get first-hand reactions, a place where tick-marks were "earned" by being the actual person claimed and having a verified job at (say) Times of London | Some Energy Company | etc.
These days there are fewer and fewer working journo's, anyone with $$'s can buy a gold mark, the zazz of black twitter has retreated, etc.
It was never my thing but as an outside observer the shine has gone and journo's are bitter about X-itter as many (not all) miss the old version.
I'd be more inclined to run with the idea that a small holdout subsection hated "old Twitter" .. and probably hates new Twitter too.
There was a long-standing distaste for sourcing news from Twitter, which is understandable, but large chunks of old media were quite happy to use Twitter to reach out to contacts (and DM behind the public eye), to gauge reactions, and to network.
There are (were) some pretty good network visualisation clouds that I saw in years past that showed how Twitter really developed some deep across-the-globe communication connections.
A lot of that early fun and novelty has fallen away, not totally due to Musk; like MySpace, Facebook, Reddit, et al., social networks become staid and passé.
... I mean, the Guardian regularly publishes negative stories about Facebook, to the point that I remember people accusing it of having a vendetta against Facebook during the Cambridge Analytica thing. Also YouTube and TikTok. There are your major social networks. I'm not aware of it having published negative stories about LinkedIn, presumably because LinkedIn is too boring to care about.