That's a great question - those approaches were based on acoustic similarity. Later approaches, including the Echo Nest's, added social features.
An example I used to use with Brian was that Bad Religion and NOFX sound similar. However, fans of one of those bands often weren't fans of the other. In this model, people have a need for a certain amount of that sound plus a social connection, NOT a need for more and more of that sound.
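To make that concrete, here's a minimal sketch of the two signals diverging. All numbers are invented for illustration: the feature vectors and listener sets are hypothetical, not real data about either band.

```python
import math

# Toy acoustic feature vectors (e.g. tempo, distortion, vocal timbre).
# Values are made up; the point is that the two vectors are nearly parallel.
bad_religion = [0.90, 0.80, 0.70]
nofx = [0.88, 0.82, 0.68]

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Hypothetical listener sets with almost no overlap.
fans_br = {"u1", "u2", "u3", "u4"}
fans_nofx = {"u4", "u5", "u6", "u7"}

def jaccard(a, b):
    """Overlap between two fan bases as a fraction of their union."""
    return len(a & b) / len(a | b)

print(f"acoustic similarity: {cosine(bad_religion, nofx):.2f}")  # near 1.0
print(f"fan overlap (Jaccard): {jaccard(fans_br, fans_nofx):.2f}")  # low
```

A purely acoustic recommender sees the first number and cross-recommends the bands; a model that also uses social signals sees the second and holds back.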
Another difference is that Pandora used human music experts where the Echo Nest used machine listening. The two have some overlap and some differences, but obviously the machine listening runs offline.
Didn't the Music Genome Project / Pandora do this half a decade before Spotify?