
Show HN: Bliss, a library to generate smart playlists from your music - Biganon
https://github.com/Polochon-street/bliss
======
git-pull
I always wondered how stuff like this was done. I use Spotify often, so I
often discover new music via their aggregation of what I already like. But I
think Spotify does it by finding what other people also like, and maybe has a
human element where audiophiles tag music internally.

So this library does it by analyzing the audio itself. _Somehow_. It seems
math is involved, but even with the docs, I still don't get it. It'd be
helpful to explain the basics of what tempo, amplitude, frequency, and attack
are. Maybe a video would help.

I also see someone wrote an MPD plugin for it
([https://github.com/Phyks/Blissify](https://github.com/Phyks/Blissify)).
That's worth checking out.

Most people don't understand how audio and programming work together, in the
same way people may not specialize in geo + programming (but they use Google
Maps) or color theory + programming (yet they use color scheme generators and
Photoshop). So there is domain knowledge that could be conveyed.

~~~
Polochon_street
As you said, as far as I know, Spotify builds its playlists mainly from
machine learning, audio files' tags, and user ratings, and not so much by
analyzing the actual content of songs.

As it's an open-source project that is supposed to work even without internet
access, and we don't want to host huge databases of user recommendations, we
chose to go with the audio analysis approach.

I'll improve the documentation (it may be a bit scarce right now) about the
analysis process, but Bliss extracts features that are supposed to be disjoint
to compute the coordinates: tempo is how « quick » the track is; attack, how
abrupt the changes in the music are; amplitude, how « loud » the song is
overall; and the frequency analysis checks whether the track is globally
high-pitched or deep.
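The intuition behind features-as-coordinates can be sketched as a
nearest-neighbor search: each song becomes a point in feature space, and
similar songs are the closest points. Here's a minimal Python sketch with
made-up, hand-normalized feature values; the track names and numbers are
hypothetical and this is not Bliss's actual API:

```python
import math

# Hypothetical 4-D feature vectors (tempo, attack, amplitude, frequency),
# each normalized to [0, 1]. These numbers are invented for illustration.
library = {
    "song_a": (0.80, 0.70, 0.60, 0.50),
    "song_b": (0.75, 0.65, 0.55, 0.45),
    "song_c": (0.20, 0.10, 0.30, 0.90),
}

def distance(u, v):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def playlist_from(seed, library):
    """Return the other tracks ordered from most to least similar to seed."""
    return sorted(
        (track for track in library if track != seed),
        key=lambda track: distance(library[seed], library[track]),
    )

print(playlist_from("song_a", library))  # song_b comes before song_c
```

A playlist is then just the tracks sorted by distance from a seed song, so
two songs end up adjacent only when all their features are close at once.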

