I disagree. Your points aren't necessarily wrong, but they ignore one big factor. Twitter chooses what content to promote to people.

I could use Twitter quite happily not knowing about the latest "scandal" in, say, the knitting world. But Twitter actively promotes that content to me - either with the "trending" sidebar or by showing me content that it thinks will increase my engagement.

That is a technical problem. How do you surface engaging content without also surfacing harmful / polarising / abusive content?

If a specific Tweet got a million likes, a "neutral" algorithm might choose to promote it. But unless that algorithm knows that the Tweet is deliberately inflammatory, it can't choose to de-prioritise it.

So, yes, there is a problem with human nature. But it is being exacerbated by deliberate technical and policy choices.




I agree with you, but I'd say this is not a "technical" problem, because it's not the engineers who screwed up; they correctly implemented the algorithms they were asked to create.

The problem is with the business model. If you only make money off ads, then you need these dark pattern algorithms to survive as a company.

Taking Twitter private gives the company space to come up with a better business model. But there is no guarantee that they'll find one.

But I certainly hope they do: the Fourth Estate used to have diverse revenue sources (ads, classifieds, subscriptions, etc.), but software ate their business model and now all they have left is ads. All those dark patterns are also affecting mainstream media, and unless we come up with a better model, we'll be stuck with clickbait and deliberately inflammatory content.


This would make sense if Twitter actually made a profit, or at least grew its "capital" (the users). But what it actually did was increase the workforce and lose active users.

This is strong evidence to me that the algorithms aren't actually driven by raw capitalistic incentives but are at least partially a tool to manipulate public discourse.


> How do you surface engaging content without also surfacing harmful / polarising / abusive content?

But the most engaging content is also the most harmful / polarizing / abusive!

"The algorithm" is mostly humans clicking buttons.


> "The algorithm" is mostly humans clicking buttons.

Actually, the algorithm is not the button clicks, but the code that interprets those button clicks. I don't think many people notice this, but HN gives a bonus to longer posts when deciding what stays at the top. It's not just the upvotes that determine which posts surface. There are probably other signals fed into the algorithm as well, with different weights attached to them. For example, I wouldn't be surprised if certain inflammatory words or phrases subtracted points from a post on HN.

For the forum I run, I auto-generate a list of the top posts for the week. It is based on the number of likes, but also on the length of the post. That list is then filtered manually to remove inflammatory posts. HN, with its effective moderation, doesn't seem to use a process much different from this.
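Roughly, in toy Python form (the weights, field names, and word list are all invented for illustration; this is not HN's algorithm or my forum's actual code):

    # Toy ranking score: likes plus a capped length bonus, minus a
    # penalty for "inflammatory" phrases. All weights are made up.
    INFLAMMATORY = {"moron", "shill", "ratio"}  # illustrative list only

    def score(post):
        length_bonus = min(len(post["text"]) / 100.0, 10.0)  # cap the bonus
        words = set(post["text"].lower().split())
        penalty = 5.0 * len(words & INFLAMMATORY)
        return post["likes"] + length_bonus - penalty

    def weekly_top(posts, n=10):
        # Candidates are ranked mostly content-agnostically; a human
        # moderator then filters the list before it's published.
        return sorted(posts, key=score, reverse=True)[:n]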

1. You can design an algorithm that optimizes for engagement and attempts to surface non-inflammatory posts.

2. You can design another algorithm that actively penalizes inflammatory posts.

3. You can further add a human element (a moderator) to penalize or decrease the visibility of inflammatory posts.

These are things that actually happen in online communities. However, they also don't always happen to the degree that would be beneficial to society as a whole. Hence, the problem.

As in other industries (oil extraction, say), there are negative externalities that can and should be accounted for.


You work on a forum - it is by design limited to some area(s) of focus. Twitter is not like that.

I think forums are great. I think forums are better than Twitter, because they can have some focus. I think social media probably doesn't scale, because social groups will always surface conflict.

I think you can design those algorithms if you know what area(s) your community cares about. Twitter cares about everything.


The algorithms I've designed are in no way aware of the content of the post, and they have worked as intended, reducing the number of inflammatory posts that get surfaced. I don't see why large social media companies couldn't implement similar, content-agnostic algorithms.

There are very simple signals of "quality" that are not based on the actual content beyond the length of the post. It's not that different from search algorithms, actually, which don't have the same problem with surfacing inflammatory posts.

Yes, those signals may be wrong in some contexts, but the signals currently in use are certainly wrong in many contexts right now, hence this discussion.


It's not just the content of the posts or discussion, it's the content of the people coming to your site.


I guess a solution is for the algorithm to stop showing me tweets because other people like them, i.e. allowing me to tick a box so that my recommendations are based only on my own engagement, and not that of the average Twitterer.
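A sketch of what that toggle might compute, assuming a hypothetical data model where the client knows which authors you've previously engaged with (field names invented for illustration):

    # "Own-engagement only" mode: ignore global like counts entirely and
    # rank candidate tweets by recency among authors I already interact with.
    def recommend(candidates, my_engaged_authors, n=20):
        mine = [t for t in candidates if t["author"] in my_engaged_authors]
        return sorted(mine, key=lambda t: t["created_at"], reverse=True)[:n]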


I blocked the trending sidebar ages ago and my twitter experience is utterly non-toxic. I don't think I ever see anything I wouldn't want/expect to see. The only time it comes close is when twitter switches my default view from 'Latest' back to 'Home' - it always dawns on me when that happens because stuff gets 'weird', but I don't think it ever goes as far as 'hostile'.


> Twitter chooses what content to promote to people.

Perhaps with default settings and with no client-side filters. But you, the user - especially a technical user - get to decide what you see.

> How do you surface engaging content without also surfacing harmful / polarizing / abusive content?

How do you as the end user? You choose who you follow, and you use filter extensions. What Twitter chooses is of no relevance.


> the latest "scandal" in, say, the knitting world

Figuring that was likely an exaggeration, but realizing that you just never really know in 2022, I decided to google "knitting scandal" and, lo and behold, the human race never fails to disappoint in its capacity to create chaos among its various communities.


>promotes that content to me - either with the "trending" sidebar

Pro tip - you can use "block element" in uBlock to get rid of that stuff. I was wondering for a bit what everyone's problem is, but then I remembered I'd done that ages ago and never looked back.
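It comes down to one line under uBlock Origin's "My filters" (the selector below is from memory and may be stale; Twitter's markup changes, so use the element picker to grab the current one):

    twitter.com##[data-testid="sidebarColumn"]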


Agreed. What's missing from many of these conversations is that Twitter has been a publicly held company, with a legal duty to increase value for its shareholders, for a decade.

If it were only held to its own morals then this would be much easier to solve (albeit still VERY hard). But it must make decisions to make $ and grow. It found its local maximum by optimizing for ad revenue, which comes from eyeballs on screens, by incentivizing and distributing emotionally engaging content - e.g. exciting "dunks" and political drama.

The side effect is an increasingly polarized populace. Seeing drama encourages more drama.

Now that Elon has taken Twitter out of the capitalist rat-race, there's an opportunity for it to dislodge from this local maximum and find a new one - perhaps one that doesn't subsist on human drama. The question becomes: how do you find that new maximum? What does it look like? How does it make money?

Elon is an ego-driven wild boar in my eyes, but he's given Twitter an unprecedented couple of years to reinvent itself outside of fiduciary duties to shareholders. Now, what will he do with it? What would we want him to do with it?


I think the world keeps overestimating the impact of Twitter. When you are on it it feels like everyone is there, but they only have 77M users in the US and 238M worldwide. That's a tiny minority. The population at large is nowhere near as polarized as Twitter makes it seem, and very few people are actually impacted by this biased perspective.


> How do you surface engaging content without also surfacing harmful / polarising / abusive content?

People's objections to twitter are rarely if ever that they have objectionable material forced upon them. The objections are that objectionable material is "surfaced" to anyone, especially the people who desire that material the most.

And the definition of that objectionable material tends to correspond to the beliefs common among people who voted for Trump, who won one election and barely lost the next. In other words, the objectionable material that should be suppressed is usually the opinion of near-majorities, and virtually always of very large minorities of the population. Speaking as a black person: if all black people believed the exact same thing about an issue (but nobody else did), that would mean 12% of the US population believed it. There are no ideas currently being censored that aren't honestly believed by more than 12% of the population, so a media that can't represent them is worthless for us.

It's not a technical problem to write a politically neutral political censorship algorithm; it's a logical problem, like trying to build a coffee machine that doesn't use coffee. It's also a fictional problem that only external defenders of political censorship think exists: the insiders at our major social media outlets are happy to solve the problem of the administration being upset with what is said on their platforms by meeting with them weekly, reporting on what's been accomplished since last week, and getting a new list of orders for the next. There's your "neutral" algorithm.


> especially the people who desire that material the most.

Just because people want to hear the blood libel about Jews doesn't mean it has to be allowed.



