
> These algorithms purposefully amplify some of the most toxic content out there

No, these algorithms (often) purposefully amplify content that increases the engagement rate or "average time on page" metric.

It's just that toxic content coincidentally often has high values on this metric:

- if you agree with the content, you stay on the page longer to read more of what you love

- if you disagree with the (toxic) content, you perhaps invest a lot of time to post strong counterarguments (increases engagement rate and average time on page)
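
To make that concrete, the kind of ranker being described boils down to something like the sketch below. This is hypothetical Python for illustration, not any platform's actual code; the field names and weights are invented. The point is that the objective is a predicted engagement score, and "toxicity" never appears in it anywhere.

  from dataclasses import dataclass

  @dataclass
  class Post:
      id: str
      predicted_dwell_seconds: float  # assumed output of some engagement model
      predicted_reply_prob: float     # assumed output of some engagement model

  def engagement_score(post: Post) -> float:
      # Invented weighting, purely for illustration.
      return post.predicted_dwell_seconds + 30.0 * post.predicted_reply_prob

  def rank_feed(posts: list[Post]) -> list[Post]:
      # Highest predicted engagement first; the ranker never looks at what the content says.
      return sorted(posts, key=engagement_score, reverse=True)

If toxic posts happen to score high on dwell time and replies, they rise to the top, but nothing in the objective asks for that.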




This distinction is needlessly pedantic. While social media companies might not have initially sought to promote toxic content, they have long since known they are effectively doing so and have done very little to stop it. In practice, "content that increases the engagement rate" so frequently means toxic content that it's not a useful distinction.


I'm not sure it is needlessly pedantic.

> In other words, maybe it's undermining our current system by actually being more technically democratic.

I think OP's point is valid if the algorithm is really just giving humans more of what they want. In other words, if the engagement metric really is responding to how humans prefer to spend their time, this really is a view into what a more technically democratic society looks like.

Personally, I don't like the idea of that and would prefer the thought that these algorithms really are measuring the wrong thing!

To highlight the point I'm making, assuming the above is true, your stance could be summed up as: "yes, humans prefer this content, but we know better and will make the right decision on their behalf"

To reinforce a sibling comment, being able to choose your own algorithm is exactly how you tease apart these two takes. Twitter supports this (chronological vs. engagement); I personally opt for chronological. I'd be interested to see the stats on usage of that!


> I think OP's point is valid if the algorithm is really just giving humans more of what they want.

What makes people engaged and what people want are not the same thing by definition. If you want to claim that they are, you should explain why you think so.


I think the burden is on you to explain why it's not the case. Nobody is being forced to read/share that content. People are choosing to do so.


You'd be venturing into the realm of psychology for that, but I would argue that short-term engagement != long-term desire. Think of gambling or gaming addictions: nobody's "forcing" those people to engage in behavior against their best interest, but many would argue that they're not freely choosing to succumb to addiction either.


> short-term engagement != long-term desire

I agree that it's not, but long-term desires are not all that people have.

> many would argue

Maybe, maybe not, but I would disagree.

> that they're not freely choosing to succumb to addiction either.

"freely" is doing the heavy lifting in that sentence. And a discussion about that is philosophical. They're as free to make that decision as they are to make any other.


> Nobody is being forced to read/share that content.

Those things are literally the result of human and automated research on how to make people read/share content. I wonder what your definition of "forced" is.


Social media companies are not beaming information into your brain.

They aren't forcing you to keep your eyes open and see the content on their sites.

Your computer can open another web page. You can get off the seat.

Tired of seeing garbage clickbait articles? Block the sites which post that stuff.

Tired of conspiracy theories and gossipy articles? Block the sites and mute/unfollow those people.

Tired of experiencing the world in 140 character chunks? Get off Twitter.


If you go to the bookstore and there are two shelves, one of Danielle Steel novels and another of Dan Brown, are you making a free choice to read trashy pulp fiction?


The point of a functioning market is that humans choose who they do business with. Both sides of the transaction are responsible for this: if you fill your shelves with trashy pulp fiction and there isn’t a market for that, people stop going to your store.

Personally I try to go out of my way to purchase books directly from the author, publisher, a book store with a local presence, and online retailers in that order.

I’ve mostly cut Barnes and Noble out of my vendor list for this exact reason: their bookshelves are full of products I have no interest in. Walking into their store has such a high noise-to-signal ratio that, unless I place my order in advance for a specific product, I don’t bother walking in anymore. The books of value are buried on shelves, drowned out by heavily marketed pop culture noise; and that’s if I’m lucky, since often I have to have the book ordered and shipped to the store to pick it up locally. They don’t meet my needs for content discovery, so I don’t do business with them very often.


Yes you are, you always have the option to read neither.


> they have long since known they are effectively doing so and have done very little to stop it

Just because they haven't succeeded does not mean they haven't tried.


What metric do you then propose to optimize for to stop promoting "toxic" content?


Don’t optimize, don’t recommend. Chronological timeline of only things you directly subscribed to. If you want to go down a rabbit hole, you should have to find it on your own.
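
For what it's worth, the feed being described here is almost trivial to express. A minimal sketch, assuming each post carries an author and a timestamp and the user has an explicit follow list (names are invented for illustration):

  from dataclasses import dataclass

  @dataclass
  class Post:
      author: str
      created_at: float  # unix timestamp

  def chronological_feed(posts: list[Post], followed: set[str]) -> list[Post]:
      # Only accounts the user explicitly subscribed to, newest first, no scoring model at all.
      subscribed = [p for p in posts if p.author in followed]
      return sorted(subscribed, key=lambda p: p.created_at, reverse=True)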


My inner Libertarian is screaming, but I would support a "choose your algorithm" law, wherein social media platforms would be required to give users a clear choice to opt out of algorithmic recommendations. Perhaps even requiring a basic chronological feed.

I know the actual text of the bill would require a lot of finagling and legalese, but that's the concept of what I want.


Your inner Libertarian has the same problem as every other person with that ideology: you are vastly overestimating the knowledge and capabilities of the average user, and the ideas that result will fail in practice. Algorithmic content is a market failure being subsidized by the fact that it helps sell ads. The capitalist greedy function means they're going to keep pushing algo content, because it's the money-making engine. If you ban the combination of algorithmic content and targeted advertising, the damage would be vastly reduced (also Meta would implode, because their business is a market failure that only exists because regulators are not enforcing competition laws that would prevent the use of free services subsidized by ads, and other antitrust laws).


To be clear, I wasn't proposing a ban, but the requirement to provide non-algorithmic alternatives for content feeds.


> but the requirement to provide non-algorithmic alternatives for content feeds.

"Non algorithmic" means "some human creates the content feed by hand". This is also not free of lots of biases and what news portals in the web did in the past.


I think "sort by recent" or filtering are considered "non-algorithmic" in this space.


Yeah, I understood what you said; I just would be against it in general unless advertising is strictly decoupled from it. Even then, I'm not sure there is a huge benefit to it.

There used to be a news app called Pulse that LinkedIn bought and killed. To me, that was the best possible execution of the feed concept. It was essentially RSS with a scrollable feed component. You could browse feeds by category, but it wasn't recommending things based on current interests. If you wanted to go down a right/left wing newshole it was possible, but you had to consciously seek it out and go down it. The app wasn't breadcrumbing you.


except recommendation platforms are more popular all the time. ppl will choose the algo feed bc they like it more. idk why but everybody is taking america off the hook for wanting toxic shit and gravitating to it. its at least half a demand side problem.

if u want an example: tiktok has the for you page (algo discovery) and the following page (just ppl u follow). its the fyp where ppl spend time.


I can't respond to the child comment, so I'll reply here. The "type everything out" must be a generational/cultural thing, because when I read your comment I literally did not realize you used words like "ppl"; I automatically expanded it to "people" when reading it.


If that's true, why is Twitter making it so difficult (and more difficult over time) to see only content you follow, and Facebook making it impossible?


Please type out your words in full, using so much texting shorthand significantly detracts from what you are trying to say.


The answer to that question has nothing to do with my objection. I was simply pointing out the irrelevance of the "original intent" of these algorithms, given that we now know, and have long known, what they do. Focusing on that downplays their effects, which can indeed be considered their intent nowadays, since we have known the effects for a while.

Say you hit a button, and each time you hit it you gain a million dollars. After you hit that button once or twice, you discover that it also causes great harm to other people. If you continue to hit the button, your original intent (when you didn't know its effects) is no longer relevant to whether or not you should continue hitting the button, since you would now knowingly be harming people for your own profit. The side-effect is no longer a side-effect. It's just an effect.

To answer your question anyway, perhaps we should consider the well-being of a user and bake that into our algorithms. It might not be as profitable, but there's a reason we regulate companies: having some restrictions on profitability can often be a net benefit to society.


> perhaps we should consider the well-being of a user and bake that into our algorithms. It might not be as profitable

The problem rather is: how are we supposed to algorithmically measure the well-being of a user?


You do not necessarily need a separate metric; you can filter out toxic content from other engaging content.
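
As a rough sketch of that idea (toxicity_prob and the threshold are assumptions for illustration, standing in for whatever separate classifier you trust), you keep the engagement objective and simply gate it:

  def filtered_feed(posts, engagement_score, toxicity_prob, threshold=0.8):
      # Drop anything the (assumed) classifier flags as likely toxic,
      # then rank the rest by the existing engagement score.
      kept = [p for p in posts if toxicity_prob(p) < threshold]
      return sorted(kept, key=engagement_score, reverse=True)

How you decide what counts as toxic is a separate question; the point is just that it does not have to be baked into the engagement metric itself.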


> you can filter out toxic content from other engaging content.

If this is your stance: what algorithm do you propose to separate content that is "toxic" from content that is "non-toxic"?


I'm not taking a stance, just pointing out that there does not need to be any single metric you need to optimise for.


Who gets to define what “toxic content” is? Is the Great Barrington Declaration toxic? How about studies that show masks might not work? How about public sourced data showing the IFR of Covid was nowhere near what “the experts” modeled early on?


The owners of the space can make those decisions, and the users can provide feedback, just like it happens now. There's no static status quo where everybody's happy, just like there's no static society.


So, the previous point still stands. These algorithms purposefully amplify content that increases engagement, which often happens to be toxic. Ergo, these algorithms often purposefully amplify toxic content.


At this very moment, the person trending highest across all social media is a sex trafficker hawking his self improvement remote mentorship thing to insecure young men. His basic tactic is to say blatantly misogynistic things to get eyeballs.

I saw youtube's new shorts feature and clicked on it out of curiosity. It gave me a couple of funny cat videos and some cooking content copied from tik tok... but after that it was just a doomscroll of the guy I'm talking about above, Prager University, etc.

Youtube has like 2 decades of data on what videos I watch, and it's definitely not that shit, yet it's clear their algorithm is optimizing to put that in front of my eyes.


Interestingly normal youtube recommendations continue to be pretty good, but the shorts feature is just a cesspool that's far worse than tik tok.


I’m interested that your experience with YouTube recommendations contrasts so drastically with my own!


I watch way too much youtube, to the point that they have lots of data on what I like.


> At this very moment, the person trending highest across all social media is a sex trafficker hawking his self improvement remote mentorship thing to insecure young men

Who is this? And what platforms? I'm not seeing anything like this on fb/twtr/ig/tt/yt


His name is Andrew Tate. He’s been making videos for years but blew up recently. I think his primary platform is youtube


The algorithm doesn't care what it's amplifying, only that the amplification increases engagement. Toxic is a correlation, not a causation.


If it's known to be a correlation, then it's disingenuous to say they're innocent for simply wanting to amplify engagement.


> If it's known to be a correlation, then it's disingenuous to say they're innocent for simply wanting to amplify engagement.

But if it's known to be just a correlation, it is defamation to claim that this was done on purpose ("purposefully").


If you know exposure to radiation causes cancer, then you shouldn't let people continue to mine, even if all you want is more uranium, or what have you, for philanthropic purposes.


Purposefully is a red herring. Try knowingly.


You seem to be confusing correlation with causation. Correlation is effectively coincidence. You can't expect anyone to make decisions, business or otherwise, based on coincidence. In fact, we attempt to solve too many symptoms based on correlation. We can't keep doing that. It's creating more noise while true root problems continue and expand.


The irony is that these machine learning algorithms are effectively coincidence machines. There's no rhyme or reason as to why they work once you have even just a handful of neural net layers, and people invest quite a lot into these magic recommendation machines.

As for toxicity, I don't think there's one magical root cause for all of it. And I'd argue that most features and designs to increase engagement _are_ one of the many causes for people to create lucrative polarizing digital content.

edit: Also I'm not sure how my original post was confusing correlation with causation. You can replace "correlation" with "coincidence" there without issue.


They're tuned to increase engagement. They figure out based on inputs what does that. There's no morality, no value judgement, etc. They try to increase engagement.

Why toxic content (which is subjective) appeals to so many humans is really the question here. The algorithms have no say in that.


A coincidence is a good starting point for discovering things. A natural next step would be to figure out how to dissect trained networks and turn them into proper models.


I hope we can figure it out! I'm a big fan of explainable AI initiatives. Though I have a feeling it won't be for another decade or so before anything huge is discovered, considering how long the technology has been around.


> Ergo, these algorithms often purposefully amplify toxic content.

"Purposefully" means that the algorithms were developed with the purpose to amplify toxic content. I seriously doubt this is the case. Amplifying toxic content is rather an unintended side effect of the metrics that the algorithms optimize for.


Is it your position then that since the developers had neutral intent, therefore the algorithms do not need to be adjusted?


> Is it your position then that since the developers had neutral intent, therefore the algorithms do not need to be adjusted?

No, my position is that since the developers had neutral intent, it is defamation to claim that these algorithms purposefully amplify toxic content.


I'm not sure how "defamation" is germane to the discussion, or the original point that you responded to.


The algorithm is not looking for toxic content to amplify. It's looking for content that would attain the most engagement of its users. If you consider the posts to be toxic, then maybe you are in a minority.


Everyone in this thread but you understands that even if they are not optimizing for outrage, by optimizing for engagement they implicitly (and objectively) serve more negative/polarizing/divisive content.


This doesn't appear to be a response to any of the content of the comment it is posted in reply to. Or indeed the entire thread.


Yes it does. The amplified content is just content that attracts the most engagement. It is not a reflection of how "toxic" the content is. If someone considers the post to be toxic and does not engage with it, then that person is not in the majority of users.


This line of argument is exactly why I think intention (mens rea) should be abolished from the legal system completely. Outcomes matter, intentions don't matter. If you don't want to put someone in danger, don't do the thing that may put them in danger. It's up to you to anticipate the consequences of your actions.

It really doesn't matter what the algorithm is meant to do, because the material consequences of what it does are very obvious.


I suggest that it might be worth reflecting on what ‘toxic’ means, because it is at least unclear to me, and the term seems to carry a lot of significance in this conversation!


> No, these algorithms (often) purposefully amplify content that increases the engagement rate or "average time on page" metric. It's just that toxic content coincidentally often has high values on this metric

Because people democratically choose to engage with it!



