- searching for music with natural language
- extracting hundreds of moods & emotions from any track
- finding tracks by tags
- finding similar-sounding music
- and more.
The website is https://galiboo.com/, and the API docs are at apidocs.galiboo.com.
We've also got a live demo at demo.galiboo.com.
We also have a Python client library at https://github.com/galiboo/galiboo-python.
Our APIs are in beta, so we'd love to hear your thoughts & feedback! :)
How big is the selection of songs you're training on and can recommend from or search through (non-demo version)?
How diverse are the languages? Is it mainly English, since it offers text search?
To get a non-demo API key, how much do I have to commit? Is there a "pay-as-you-go" model where I can use this as a personal thing, or are companies the only target audience?
Edit: Found the answer to the first question for demo users in the docs: "For users with demo API keys, we've loaded a fairly diverse, yet small, catalog of about 20K+ tracks into our backend system"
We're working on a pay-as-you-go model for developers & personal use, which we'll have live in the coming days.
Until then, you can use a demo API key, which we're currently in the process of emailing out to everyone who requested one. Thanks! :)
Due to the immense feedback & unexpected surge in API key requests that we received from being featured on Hacker News, we've decided to open up access to our API platform to all developers! :D
So now, you can get your own API key at: https://galiboo.com
Our APIs are currently in early beta, so we'd love to hear any feedback that you might have! :)
Our API docs: apidocs.galiboo.com
Our Python library: https://github.com/galiboo/galiboo-python
You can also join our LIVE chat (with us & other developers) at https://gitter.im/galiboo/Lobby
If you have any questions, please shoot me an email at email@example.com and I'd love to help! :D
The main difference, and a significant improvement, is that our technology not only extracts high-level data (e.g. emotions, tags) from music (Echonest, for example, primarily emphasizes low-level attributes like beats, tempo, etc.), but also performs other important operations over that data, like querying by moods/tags, finding similar tracks, and searching by natural language.
So, in a way, our technology acts as an actionable intelligence layer over music, as opposed to just extracting & returning low-level data.
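To make the "operations over extracted data" idea concrete, here's a minimal sketch of a mood-based similar-tracks query. Everything in it is an assumption for illustration: the track names, the mood scores, and the cosine-similarity ranking are hypothetical, not Galiboo's actual models, catalog, or API.

```python
import math

# Hypothetical per-track mood scores (values in [0, 1]), standing in for the
# high-level data a system like this would extract from audio.
CATALOG = {
    "track_a": {"happy": 0.9, "energetic": 0.8, "calm": 0.1},
    "track_b": {"happy": 0.85, "energetic": 0.75, "calm": 0.2},
    "track_c": {"happy": 0.1, "energetic": 0.2, "calm": 0.95},
}

def cosine(u, v):
    """Cosine similarity between two sparse mood-score dicts."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0.0) * v.get(k, 0.0) for k in keys)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v) if norm_u and norm_v else 0.0

def similar_tracks(track_id, catalog):
    """Rank every other track in the catalog by mood similarity to track_id."""
    query = catalog[track_id]
    scored = [(t, cosine(query, moods))
              for t, moods in catalog.items() if t != track_id]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

# "track_b" (happy, energetic) ranks above "track_c" (calm) for "track_a".
print(similar_tracks("track_a", CATALOG))
```

Querying by a mood ("find calm tracks") is the same idea with a hand-built query vector instead of an existing track's scores.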
And yes, you could give it new songs & get back the extracted metadata. Plus, our commercial users can also integrate their own music catalog with our technology.
Thanks for the question! :)
Dying to try it on some lesser known songs when the traffic dies down :)
update: it's working
If somebody ever runs with this idea - just let me know afterwards how much I missed out on.