Hacker News

A site to discover content using an algorithm that ignores metrics we have begun to game (likes and views) and instead builds rankings based on pairwise comparisons.

It would allow discovery of good new content that hasn't employed growth hacks, and would also differentiate between equally rated content.

https://aeolipyle.co (algorithm complete -- need to find good use for it.)

Many recommendation services rank content based on its actual relevance (not just on likes/retweets). For instance, Prismatic (http://getprismatic.com) used clustering to group content by topic, which involves pairwise comparisons between documents (as you mentioned).

I'm curious: pairwise comparisons of what? It's not clear to me what your algorithm does.

After you view content (ideally immutable content like a film, music, a book or a computer game), when you're offered the chance to rate it you are instead prompted on how it fared vs a previous experience: "You've just seen Terminator 2 -- how does it compare to The Matrix?"

In terms of the algorithm, it gets around the issues of merging incomplete Condorcet elections (as not everyone will compare or rank every item) and clustering.

It turns these partial elections into a single ordered ranking.
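The comment doesn't say which aggregation method the site uses, so here's a minimal sketch of one standard way to turn partial pairwise comparisons into a single ordering: Copeland-style scoring (net pairwise wins). The function name and the film examples are my own illustration, not the site's code:

```python
from collections import defaultdict

def rank_from_pairwise(comparisons):
    """Rank items from partial pairwise comparisons using net wins.

    `comparisons` is a list of (winner, loser) pairs; no single voter
    needs to have compared every item.
    """
    wins = defaultdict(int)
    losses = defaultdict(int)
    items = set()
    for winner, loser in comparisons:
        wins[winner] += 1
        losses[loser] += 1
        items.update((winner, loser))
    # Sort by net wins (Copeland-style score), best first;
    # ties broken alphabetically so the output is deterministic.
    return sorted(items, key=lambda x: (-(wins[x] - losses[x]), x))

votes = [("Terminator 2", "The Matrix"),
         ("The Matrix", "Titanic"),
         ("Terminator 2", "Titanic")]
ranking = rank_from_pairwise(votes)
# ranking == ["Terminator 2", "The Matrix", "Titanic"]
```

This ignores the cycle-resolution and clustering parts the comment alludes to; it only shows that a total order can fall out of incomplete pairwise data.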

If I understand correctly, this sounds like an idea that I've been thinking about for a little while.

Essentially, you do binary-search insertion into a list, where the comparison function is a prompt asking the user "Is A better than B?" (If it's too difficult to judge "betterness" between two items, you could just as easily swap in a different comparison: "Is A funnier than B?")

One thing that people always ask when I mention this is: "What if A and B are equal?" Well, then you answer no, because A is not better than B. If your answers are consistent, then A and B will end up next to each other in the list.
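The scheme above can be sketched directly: a binary insertion where the comparator stands in for the user prompt, so each new item costs only O(log n) questions. The simulated `is_better` oracle is my own stand-in for a real user:

```python
def insert_ranked(ranked, new_item, is_better):
    """Insert new_item into `ranked` (best first) by binary search.

    `is_better(a, b)` stands in for asking the user "Is A better
    than B?"; answering "no" on equal items places them adjacent.
    """
    lo, hi = 0, len(ranked)
    while lo < hi:
        mid = (lo + hi) // 2
        if is_better(new_item, ranked[mid]):
            hi = mid        # new item outranks ranked[mid]: search upper half
        else:
            lo = mid + 1    # equal or worse: search lower half
    ranked.insert(lo, new_item)

# Simulate a user whose true preference order is known:
true_order = ["Terminator 2", "The Matrix", "Titanic"]
def is_better(a, b):
    return true_order.index(a) < true_order.index(b)

ranked = []
for film in ["Titanic", "Terminator 2", "The Matrix"]:
    insert_ranked(ranked, film, is_better)
# ranked == ["Terminator 2", "The Matrix", "Titanic"]
```

Because "A is not better than B" routes equal items to the same side every time, consistent answers leave ties next to each other, exactly as described.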

yup -- I believe this is a better user experience in terms of capturing ratings (by capitalising on the availability heuristic)

What you're describing would technically work when each person compares every item (and would fall into the domain of Condorcet methods).

However, in practice the election becomes a graph (rather than a list or an x/y table) with cyclical dependencies and conflicting comparisons. It becomes quite hard to resolve -- but it can be.
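The conflict being described is easy to make concrete: aggregated comparisons form a directed graph, and inconsistent answers produce cycles that no single ordered list can satisfy. A minimal depth-first-search cycle check (my own illustration, not the site's code):

```python
def has_cycle(edges):
    """Detect a cycle in the directed graph of pairwise preferences.

    `edges` are (winner, loser) pairs; a cycle means the comparisons
    are mutually inconsistent and cannot be flattened into one list.
    """
    graph = {}
    for winner, loser in edges:
        graph.setdefault(winner, []).append(loser)
        graph.setdefault(loser, [])
    WHITE, GRAY, BLACK = 0, 1, 2   # unvisited / on current path / done
    color = {v: WHITE for v in graph}

    def dfs(v):
        color[v] = GRAY
        for u in graph[v]:
            if color[u] == GRAY or (color[u] == WHITE and dfs(u)):
                return True        # back edge: cycle found
        color[v] = BLACK
        return False

    return any(color[v] == WHITE and dfs(v) for v in graph)

# A > B and B > C, but also C > A: the comparisons conflict.
assert has_cycle([("A", "B"), ("B", "C"), ("C", "A")])
assert not has_cycle([("A", "B"), ("B", "C"), ("A", "C")])
```

Resolving such cycles (e.g. by dropping the weakest conflicting edge, as ranked-pairs methods do) is where the hard part of the merging lives.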

I envisioned starting with an empty list, and populating it with the user's comparisons as they come in. That way, you don't have to deal with unrated items.

Cyclical/conflicting comparisons are a function of faulty users, the algorithm can't take the blame for that! ;)
