
The case for a relative popularity meter - whacked_new

======
Here are some thoughts I have regarding popularity meters on news sites and
such. Maybe some people would find it interesting or possibly useful. I would
also love to hear other opinions, so do share.

I am sure many readers here have seen the new SlashRating box, which sports
digg-esque (generally speaking) number ratings for each article. You may have
noticed that the Gmail Paper article
(<http://slashdot.org/articles/07/04/01/0631243.shtml>) and the X-Files
article right after it (<http://slashdot.org/articles/07/04/01/1231211.shtml>)
have substantially different SlashRatings (a 1000x+ difference). Humor aside,
both may be valid ratings. Readers familiar with digg (an easy example) have
seen ratings ranging from the single digits to well into the thousands. I
believe there is a limit to the informativeness of a raw number rating. And
while many services have adopted similar number-based displays of popularity,
here I make a case for a relative measure.

As a casual, passing-by reader on digg (which describes most readers), when
you see a page full of ratings in the hundreds, you probably process only two
things from each score: the digit length and the leftmost digit. When a 1000+
rating appears, you immediately know from the digit length alone that it is an
order of magnitude more popular. More popular than what? Than the surrounding
articles. Naturally, your eyes gravitate to the highest ratings on the page.
If you reach a page full of articles rated in the thousands, you likely apply
the same strategy: scan the digit length and the leftmost digit. Unless two or
more articles are rated an order of magnitude above the rest, you will likely
ignore the less significant digits -- and when such a fine-grained comparison
is actually needed, the articles are probably all popular enough that you read
them anyway.
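To make the scanning strategy above concrete, here is a small sketch (my own illustration, not anything digg or Slashdot actually does) of what a reader effectively computes: the digit length gives an order of magnitude, and only when two scores share a magnitude does the leftmost digit come into play.

```python
import math

def perceived_magnitude(score: int) -> int:
    """Order of magnitude a scanning reader registers (digit length minus 1)."""
    return math.floor(math.log10(score)) if score > 0 else 0

def reads_as_more_popular(a: int, b: int) -> bool:
    """True when a casual reader would register a as clearly more popular
    than b: either a has more digits, or the same digit length but a larger
    leftmost digit. Less significant digits are ignored, as in the text."""
    ma, mb = perceived_magnitude(a), perceived_magnitude(b)
    if ma != mb:
        return ma > mb
    # Same digit length: compare leftmost digits only.
    return a // 10 ** ma > b // 10 ** mb
```

Under this reading, 4903 and 4692 are indistinguishable at a glance (same digit length, same leading 4), while 1000 immediately beats 999.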

After you mentally shift to a scale of comparison, you ignore details at other
orders of magnitude; this is, for example, how your eyes adjust to
consistently bright or dark environments. Of course numbers are easy to
output and fast to read, and the details may be interesting and important to
the developers, but for browsing efficiency most readers neither care nor need
to know whether something is rated 4903 or 4692 points, as long as a reference
exists for comparison. And that reference will always depend on the number of
participating users at a given moment. A better system should thus display
only a relative measure of popularity within a given timeframe. This idea is
not necessarily new, but I believe it is definitely a better approach to
popularity measurement, and more so if you expect your project to eventually
support a large votership.
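One way to realize such a relative display, sketched below under my own assumptions (the label names and the scale-against-the-window-maximum rule are illustrative, not from any real service): take the scores of the articles in the current timeframe and map each to a label based on its fraction of the window's top score, so the display reads the same whether the site has hundreds of voters or millions.

```python
def relative_ratings(scores, labels=("quiet", "warm", "hot", "on fire")):
    """Map raw vote counts from one timeframe to relative labels.

    Each score is scaled against the current maximum, so the displayed
    popularity stays meaningful regardless of how many users voted.
    """
    top = max(scores) or 1  # avoid dividing by zero on an all-zero window
    out = []
    for s in scores:
        frac = s / top
        # Bucket the fraction into one of the labels; the top score
        # always lands in the last bucket.
        idx = min(int(frac * len(labels)), len(labels) - 1)
        out.append(labels[idx])
    return out
```

For example, a window of scores [10, 500, 1000] would display as "quiet", "hot", and "on fire" no matter what absolute scale the site operates at.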

An existing example that illustrates good use of this is something like a
community download site. Say something is "popular" when over 50% of the users
download it. It then makes more sense to just write "popular" when
total_downloads / total_users > 0.5, rather than showing "1,040 downloads!" or
even "5,503,395 downloads!" (okay, millions is different... but if your users
were, like, ants, big deal). In the end, "popularity" is a relative measure.
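The download-site rule above fits in a few lines; this is a minimal sketch of my own (the fallback text shown below the 50% threshold is an assumption, not part of the original example):

```python
def download_label(total_downloads: int, total_users: int) -> str:
    """Show 'popular' once more than half the user base has downloaded
    something, instead of a raw count whose meaning shifts with site size."""
    if total_users == 0:
        return "no data"
    ratio = total_downloads / total_users
    # Below the threshold, a percentage is still a relative, size-independent
    # display (illustrative choice).
    return "popular" if ratio > 0.5 else f"{ratio:.0%} of users"
```

So "1,040 downloads" on a 2,000-user site reads simply as "popular", while the same count on a 2,000,000-user site reads as a small percentage.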

I'm sure many folks here have thoughts on related issues; let's hear them :)

