Hacker News

IIRC, a simple approximation of that horrendous formula is :

(positive)/(positive+negative+1)

It rewards items with more ratings. Basically, you initialize the number of negative ratings to 1 instead of 0.
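A minimal sketch of that approximation (the function name is mine):

```python
def score(positive, negative):
    """Approximate lower-bound score: the +1 acts as one phantom
    negative vote, penalizing items with few ratings."""
    return positive / (positive + negative + 1)

# Same 100% positive ratio, but more votes ranks higher:
# score(1, 0) = 0.5, score(10, 0) = 10/11
```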




Very interesting; your recollection looks correct to me.

x / (x+y+1) :: https://www.wolframalpha.com/input/?i=plot+x+%2F%28x+%2B+y+%...

horrendous formula :: https://www.wolframalpha.com/input/?i=plot+%28%28x%2F%28x%2B...

Much less prone to typos.


And the difference between the two, specifically the full formula minus x/(x+y+1):

https://www.wolframalpha.com/input/?i=plot++-x+%2F%28x+%2B+y...


This is another way of moving towards a Bayesian lower bound instead of a frequentist one. As your formula shows, in this case the Bayesian formula is super duper simple.

(positive + constant1) / (positive + negative + constant1 + constant2)

For more details see the beta distribution.
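A sketch of that generalized form, where the constants are the pseudo-counts of a Beta prior (parameter names are mine):

```python
def bayesian_score(positive, negative, prior_pos=1.0, prior_neg=1.0):
    """Posterior mean of a Beta(prior_pos, prior_neg) prior after
    observing the votes; the priors act as phantom votes."""
    return (positive + prior_pos) / (positive + negative + prior_pos + prior_neg)

# With no votes, the score is just the prior mean:
# bayesian_score(0, 0) = 0.5
```

Larger prior constants shrink the score harder towards the prior mean, which is how you tune how much evidence a new item needs before it can rank highly.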


This is a decent approximation. It handles all the common sanity-check cases:

+10/-0 should rank higher than +1/-0

+10/-5 should rank higher than +10/-7

+100/-3 should rank higher than +3/-0

+10/-1 should rank higher than +900/-200
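These rankings can be checked directly against the approximation (a quick sanity-test sketch):

```python
def score(p, n):
    # positive / (positive + negative + 1)
    return p / (p + n + 1)

# (winner, loser) pairs from the list above
cases = [
    ((10, 0), (1, 0)),
    ((10, 5), (10, 7)),
    ((100, 3), (3, 0)),
    ((10, 1), (900, 200)),
]
for winner, loser in cases:
    assert score(*winner) > score(*loser)
```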


This is like the "add-one smoothing" that is commonly used for entropy / population-diversity statistics.

(class1 + 1)/(class1 + class2 + 2).

(effectively, initialize all counts to 1).

https://en.wikipedia.org/wiki/Additive_smoothing https://en.wikipedia.org/wiki/Rule_of_succession
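A sketch of add-one smoothing over an arbitrary class set (function and variable names are mine):

```python
def add_one_probs(counts, classes):
    """Laplace (add-one) smoothed probabilities: every class gets one
    pseudo-count, so unseen classes never get probability zero."""
    total = sum(counts.get(c, 0) for c in classes) + len(classes)
    return {c: (counts.get(c, 0) + 1) / total for c in classes}

# Two classes, only one observed: the unseen class still gets mass.
# add_one_probs({'a': 3}, ['a', 'b']) -> {'a': 0.8, 'b': 0.2}
```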


The main flaw in this formula is that when positive = 0, the negative votes have no weight: the score is 0 no matter how many there are.
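Concretely, with the positive/(positive+negative+1) approximation:

```python
def score(p, n):
    return p / (p + n + 1)

# Zero positives always yields zero, regardless of negatives:
# score(0, 1) == score(0, 100) == 0.0
```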


A heuristic I use when looking at products with low numbers of reviews is to add one positive and one negative review, so

(positive+1)/(positive+negative+2).

This basically makes the 'default' rating 50% or 3 stars or whatever, and votes move the rating from that default.
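A sketch of that heuristic (the function name is mine):

```python
def padded_rating(positive, negative):
    """Pad the counts with one phantom positive and one phantom
    negative review, so a product with no reviews defaults to 50%."""
    return (positive + 1) / (positive + negative + 2)

# No reviews -> 0.5; real votes pull the score away from the default:
# padded_rating(0, 0) = 0.5, padded_rating(4, 0) = 5/6
```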



