> These algorithms purposefully amplify some of the most toxic content out there
No, these algorithms (often) purposefully amplify content that increases the engagement rate or "average time on page" metric.
It's just that toxic content coincidentally often has high values on this metric:
- if you agree with the content, you stay on the page longer to read more of what you love
- if you disagree with the (toxic) content, you perhaps invest a lot of time to post strong counterarguments (increases engagement rate and average time on page)
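To make that mechanism concrete, here is a minimal sketch of an engagement-ranked feed; the names, weights, and signals are hypothetical, not any platform's actual ranker. The point is that nothing in it ever looks at what the content says:

```python
from dataclasses import dataclass

@dataclass
class Post:
    post_id: str
    predicted_dwell_seconds: float  # model's guess at time-on-page
    predicted_replies: float        # model's guess at replies / quote posts

def engagement_score(post: Post) -> float:
    # Hypothetical weights; real systems blend many such signals.
    return 0.7 * post.predicted_dwell_seconds + 0.3 * post.predicted_replies

def rank_feed(candidates: list[Post], limit: int = 50) -> list[Post]:
    # Pure argmax over predicted engagement; "toxic" is not a feature anywhere.
    return sorted(candidates, key=engagement_score, reverse=True)[:limit]
```

If toxic posts happen to score high on predicted dwell time and replies, they get amplified as a side effect of the objective, never because toxicity was selected for.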
This distinction is needlessly pedantic. While social media companies might not have initially sought to promote toxic content, they have long since known they are effectively doing so and have done very little to stop it. In practice, "content that increases the engagement rate" so frequently means toxic content that it's not a useful distinction.
> In other words, maybe its undermining our current system by actually being more technically democratic.
I think OPs point is valid if the algorithm is really just giving humans more of what they want. In other words, if the engagement metric really is responding to how humans prefer to spend their time, this really is a view into what a more technically democratic society looks like.
Personally, I don't like the idea of that and would prefer the thought that these algorithms really are measuring the wrong thing!
To highlight the point I'm making, assuming the above is true, your stance could be summed up as: "yes, humans prefer this content, but we know better and will make the right decision on their behalf"
To reinforce a sibling comment, being able to choose your own algorithm is exactly how you tease apart these two takes. Twitter supports this (chronological vs. engagement) - I personally opt for chronological. I'd be interested to see the stats on usage of that!
You'd be venturing into the realm of psychology for that, but I would argue that short-term engagement != long-term desire. Think of gambling or gaming addictions: nobody's "forcing" those people to engage in behavior against their best interest, but many would argue that they're not freely choosing to succumb to addiction either.
I agree that it's not, but long-term desires are not all that people have.
> many would argue
Maybe, maybe not, but I would disagree.
> that they're not freely choosing to succumb to addiction either.
"freely" is doing the heavy lifting in that sentence. And a discussion about that is philosophical. They're as free to make that decision as they are to make any other.
> Nobody is being forced to read/share that content.
Those things are literally the result of human and automated research on how to make people read/share content. I wonder what is your definition of "forced".
If you go to the bookstore and there are two shelves, one of Danielle Steel novels and another of Dan Brown, are you making a free choice to read trashy pulp fiction?
The point of a functioning market is that humans choose who they do business with. Both sides of the transaction are responsible for this - if you fill your shelves with trashy pulp fiction and there isn't a market for that, people stop going to your store.
Personally I try to go out of my way to purchase books directly from the author, publisher, a book store with a local presence, and online retailers in that order.
I’ve mostly cut Barnes and Noble out of my vendor list for this exact reason: their shelves are full of products I have no interest in. Walking into their store has such a high noise-to-signal ratio that, unless I place my order in advance for a specific product, I don’t bother walking in anymore. The books of value are buried on shelves, drowned out by heavily marketed pop-culture noise; and that’s if I’m lucky. Often I have to have a book ordered and shipped to the store just to pick it up locally. They don’t meet my needs for content discovery, so I don’t do business with them very often.
Don’t optimize, don’t recommend. Chronological timeline of only things you directly subscribed to. If you want to go down a rabbit hole, you should have to find it on your own.
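For contrast, the "don't optimize, don't recommend" feed described above is almost trivially simple; this is just a sketch with hypothetical types:

```python
from dataclasses import dataclass
from datetime import datetime

@dataclass
class Post:
    author: str
    created_at: datetime
    body: str

def chronological_feed(posts: list[Post], following: set[str]) -> list[Post]:
    # Only accounts the user explicitly follows, newest first;
    # no engagement model anywhere in the path.
    subscribed = [p for p in posts if p.author in following]
    return sorted(subscribed, key=lambda p: p.created_at, reverse=True)
```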
My inner Libertarian is screaming, but I would support a "choose your algorithm" law, wherein social media platforms would be required to give the user a clear choice to opt out of algorithmic recommendations. Perhaps even requiring a basic chronological feed.
I know the actual text of the bill would require a lot of finagling and legalese, but that's the concept of what I want.
Your inner Libertarian has the same problem as every other person with that ideology: you are vastly overestimating the knowledge and capabilities of the average user, and the ideas that result will fail in practice. Algorithmic content is a market failure being subsidized by the fact that it helps sell ads. The capitalist greedy function means they're going to keep pushing algo content, because it's the money-making engine. If you ban the combination of algorithmic content and targeted advertising, the damage would be vastly reduced (also Meta would implode, because its business is a market failure that only exists because regulators are not enforcing the competition laws that would prevent ad-subsidized free services, or other antitrust laws).
> but the requirement to provide non algorithmic alternatives for content feeds.
"Non algorithmic" means "some human creates the content feed by hand". This is also not free of lots of biases and what news portals in the web did in the past.
Yeah, I understood what you said, I just would be against it in general unless advertising is strictly decoupled from it. Even then, I'm not sure there is a huge benefit to it.
There used to be a news app called Pulse that LinkedIn bought and killed. To me, that was the best possible execution of the feed concept. It was essentially RSS with a scrollable feed component. You could browse feeds by category, but it wasn't recommending things based on current interests. If you wanted to go down a right/left wing newshole it was possible, but you had to consciously seek it out and go down it. The app wasn't breadcrumbing you.
except recommendation platforms are more popular all the time. ppl will choose the algo feed bc they like it more. idk why but everybody is taking america off the hook for wanting toxic shit and gravitating to it. its at least half a demand side problem.
if u want an example: tiktok has the for you page (algo discovery) and the following page (just ppl u follow). its the fyp where ppl spend time.
I can't respond to the child comment, so I'll reply here. The "type everything out" preference must be a generational/cultural thing, because when I read your comment I literally did not realize you had used words like "ppl"; I automatically expanded it to "people" while reading.
If that's true, why is twitter making it so difficult (and more difficult over time) to see only content you follow, and Facebook making it impossible?
The answer to that question has nothing to do with my objection. I was simply pointing out the irrelevance of the "original intent" of these algorithms, given that we now know, and have known what they do. Focusing on that downplays their effects, which can indeed be considered their intent nowadays since we have known the effects for a while.
Say you hit a button, and each time you hit it you gain a million dollars. After you hit that button once or twice, you discover that it also causes great harm to other people. If you continue to hit the button, your original intent (when you didn't know its effects) is no longer relevant to whether or not you should continue hitting the button, since you would now knowingly be harming people for your own profit. The side-effect is no longer a side-effect. It's just an effect.
To answer your question anyway, perhaps we should consider the well-being of a user and bake that into our algorithms. It might not be as profitable, but there's a reason we regulate companies: having some restrictions on profitability can often be a net benefit to society.
Who gets to define what “toxic content” is? Is the Great Barrington Declaration toxic? How about studies that show masks might not work? How about public sourced data showing the IFR of Covid was nowhere near what “the experts” modeled early on?
The owners of the space can make those decisions, and the users can provide feedback, just like it happens now. There's no static status quo where everybody's happy, just like there's no static society.
So, the previous point still stands. These algorithms purposefully amplify content that increases engagement, which often happens to be toxic. Ergo, these algorithms often purposefully amplify toxic content.
At this very moment, the person trending highest across all social media is a sex trafficker hawking his self improvement remote mentorship thing to insecure young men. His basic tactic is to say blatantly misogynistic things to get eyeballs.
I saw youtube's new shorts feature and clicked on it out of curiosity. It gave me a couple funny cat videos and some cooking content copied from tik tok... but after that it was just a doomscroll of the guy I'm talking about above, Prager University, etc.
Youtube has like 2 decades of data on what videos I watch, and it's definitely not that shit, yet it's clear their algorithm is optimizing to put that in front of my eyes.
> At this very moment, the person trending highest across all social media is a sex trafficker hawking his self improvement remote mentorship thing to insecure young men
Who is this? And what platforms? I'm not seeing anything like this on fb/twtr/ig/tt/yt
If you know exposure to radiation causes cancer, then you shouldn't let people continue to mine, even if all you wanted was more uranium (or whatever it may be) for philanthropic purposes.
You seem to be confusing correlation with causation. Correlation is effectively coincidence. You can't expect anyone to make decisions - business or otherwise - based on coincidence. In fact, we attempt to solve too many symptoms based on correlation. We can't keep doing that. It's creating more noise while true root problems continue and expand.
The irony is that these machine learning algorithms are effectively coincidence machines. There's no rhyme or reason as to why they work once you have even just a handful of neural net layers, and people invest quite a lot into these magic recommendation machines.
As for toxicity, I don't think there's one magical root cause for all of it. And I'd argue that most features and designs to increase engagement _are_ one of the many causes for people to create lucrative polarizing digital content.
edit: Also I'm not sure how my original post was confusing correlation with causation. You can replace "correlation" with "coincidence" there without issue.
They're tuned to increase engagement. They figure out based on inputs what does that. There's no morality, no value judgement, etc. They try to increase engagement.
Why toxic content (which is subjective) appeals to so many humans is really the question here. The algorithms have no say in that.
A coincidence is a good starting point for discovering things. A natural next step would be to figure out how to dissect trained networks and turn them into proper models.
I hope we can figure it out! I'm a big fan of explainable AI initiatives. Though I have a feeling it won't be for another decade or so before anything huge is discovered, considering how long the technology has been around.
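As a toy illustration of the "dissect the trained network" idea (and only that; this is not how production explainability tooling works), you can treat the recommender as a black box you can query and fit an interpretable surrogate to its outputs. The black box and feature meanings below are made up:

```python
import numpy as np

def black_box_score(features: np.ndarray) -> np.ndarray:
    # Stand-in for an opaque recommender we can only query.
    return 3.0 * features[:, 0] - 1.5 * features[:, 1] + np.tanh(features[:, 2])

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))   # sampled inputs (say: dwell time, replies, shares)
y = black_box_score(X)           # black-box outputs at those inputs

# Linear surrogate fit by least squares: which input directions does the
# black box actually respond to?
weights, *_ = np.linalg.lstsq(np.c_[X, np.ones(len(X))], y, rcond=None)
print("approximate per-feature influence:", weights[:3])
```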
> Ergo, these algorithms often purposefully amplify toxic content.
"Purposefully" means that the algorithms were developed with the purpose to amplify toxic content. I seriously doubt this is the case. Amplifying toxic content is rather an unintended side effect of the metrics that the algorithms optimize for.
The algorithm is not looking for toxic content to amplify. It's looking for content that would attain the most engagement of its users. If you consider the posts to be toxic, then maybe you are in a minority.
Everyone in this thread but you understands that even if they are not optimizing for outrage, by optimizing for engagement you implicitly (and objectively) serve more negative/polarizing/divisive content.
Yes it does. The amplified content is just content that attracts the most engagement. It is not a reflection on how "toxic" the content is. If someone considers the post to be toxic and does not engage with it, then that person is not the majority of users.
This line of argument is exactly why I think intention (mens rea) should be abolished from the legal system completely. Outcomes matter, intentions don't matter. If you don't want to put someone in danger, don't do the thing that may put them in danger. It's up to you to anticipate the consequences of your actions.
It really doesn't matter what the algorithm is meant to do, because the material consequences of what it does are very obvious.
I suggest that it might be worth reflecting on what ‘toxic’ means, because it is at least unclear to me, and the term seems to carry a lot of significance in this conversation!
> No, these algorithms (often) purposefully amplify content that increases the engagement rate or "average time on page" metric. It's just that toxic content coincidentally often has high values on this metric
Because people democratically choose to engage with it!