> "The algorithm" is mostly humans clicking buttons.

Actually, the algorithm is not the button clicks but the code that interprets those clicks. I don't think many people notice this, but HN gives longer posts a bonus that keeps them near the top. It's not just upvotes that determine which posts surface. There are probably other signals fed into the algorithm as well, each with a different weight. For example, I wouldn't be surprised if certain inflammatory words or phrases subtracted points from a post on HN.
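
A minimal sketch of what that kind of weighted scoring could look like, in Python. HN's actual ranking code isn't public, so every signal and weight below is an assumption for illustration, not HN's real formula:

    # Hypothetical: the signals and weights are invented to illustrate
    # "several signals, each with a different weight"; not taken from HN.
    def rank_score(upvotes: int, text: str, flagged_phrases: set[str]) -> float:
        length_bonus = min(len(text), 1000) / 1000      # capped bonus for longer posts
        phrase_hits = sum(p in text.lower() for p in flagged_phrases)
        return 1.0 * upvotes + 0.5 * length_bonus - 2.0 * phrase_hits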

For the forum I run, I auto-generate a list of the top posts of the week. It is based on the number of likes, but also on the length of the post. That list is then filtered manually to remove inflammatory posts. HN, with its effective moderation, doesn't seem to use a very different process.
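
A sketch of how such a digest could be put together, assuming a minimal Post shape; the field names, the length weighting, and the moderator flag are all invented for illustration:

    from dataclasses import dataclass

    @dataclass
    class Post:
        title: str
        likes: int
        body: str
        removed_by_mod: bool = False   # set during the manual filtering pass

    def weekly_top(posts: list[Post], n: int = 10) -> list[Post]:
        # Rank on likes plus a small length bonus, then drop anything a
        # moderator pulled as inflammatory during manual review.
        kept = [p for p in posts if not p.removed_by_mod]
        kept.sort(key=lambda p: p.likes + len(p.body) / 1000, reverse=True)
        return kept[:n]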

1. You can design an algorithm that optimizes for engagement and attempts to surface non-inflammatory posts.

2. You can design another algorithm that actively penalizes inflammatory posts.

3. You can further add a human element (a moderator) to penalize or decrease the visibility of inflammatory posts. (See the sketch after this list for how the three layers compose.)
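
Composed, the three layers might look like this (a sketch; the weights and the phrase list are invented): the engagement score is the base, the automatic penalty divides it down, and the human flag zeroes it out entirely.

    def visibility(likes: int, body: str, removed_by_mod: bool,
                   flagged_phrases: set[str]) -> float:
        if removed_by_mod:                  # layer 3: human moderation wins outright
            return 0.0
        score = likes + len(body) / 1000    # layer 1: engagement-driven base score
        hits = sum(p in body.lower() for p in flagged_phrases)
        return score / (1 + hits)           # layer 2: automatic penalty per phrase hit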

These are things that actually happen in online communities. However, they don't always happen to the degree that would benefit society as a whole. Hence the problem.

As in other industries (oil extraction, for example), there are negative externalities that can and should be accounted for.




You run a forum; it is by design limited to some area(s) of focus. Twitter is not like that.

I think forums are great. I think forums are better than Twitter, because they can have some focus. I think social media probably doesn't scale, because social groups will always surface conflict.

I think you can design those algorithms if you know what area(s) your community cares about. Twitter cares about everything.


The algorithms I've designed are in no way aware of the content of the post, and they have worked as intended, reducing the number of inflammatory posts that get surfaced. I don't see why large social companies couldn't implement similar, content-agnostic algorithms.

There are very simple signals of "quality" that are not based on the actual content beyond the length of the post. It's not that different from search algorithms, actually, which don't have the same problem with surfacing inflammatory posts.
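
For instance, a ranker could score posts on structural features alone; the features and weights here are hypothetical examples of content-agnostic signals:

    def quality_signal(body_length: int, account_age_days: int,
                       author_prior_flags: int) -> float:
        # Nothing here parses the text itself, only structural features.
        length_term = min(body_length, 2000) / 2000    # capped length bonus
        age_term = min(account_age_days, 365) / 365    # established accounts score higher
        return length_term + age_term - 0.5 * author_prior_flags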

Yes, those signals may be wrong in some contexts, but the signals currently being used are certainly wrong in many contexts right now, hence this discussion.


It's not just the content of the posts or discussion; it's the content of the people coming to your site.



