-As a human, you listen to the four options and pick the one you think is moving the music in the right direction. If the music isn't going anywhere, you backtrack and click Add More.
-Right-click if you want to delete a node and all of its descendants. But if you accidentally right-click a node on the good path you wanted to keep? Poof! It's gone, and all your hard selection work with it :(
This is an interesting idea to help explore the music space, but as a human you are contributing at a rate of two bits per 2.5s of music (picking one option out of four is log2(4) = 2 bits), and it took you 10s to listen and pick. That is not much input. Maybe you could close the loop by reading biometric cues as you watch your listener listen, so you can discover their preferences.
Sorry if this feels harsh, but to me the whole point of Human-AI interaction is to make the human work less, and the whole point of human composition is to let the human express more. For me it's a miss on both counts here.
Here it doesn't feel like we're in a "20 questions"-style scenario where we can make the bits count, but rather like we're trying to put the cat in the box by hitting the walls. In typical Human-AI interaction in reinforcement learning there are some promising research directions where the human helps the AI pick the right reward function: you have the human listen to two generated music extracts and try to predict the shape of the reward function, i.e. which one the human will like better. That way you can effectively make the bits count.
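To make the idea concrete, here is a minimal sketch of learning a reward function from pairwise preferences, Bradley-Terry style. All names, the toy feature representation, and the hyperparameters are my own illustrative assumptions, not anything from MuseTree or MuseNet:

```python
import math
import random

def predict_preference(w, xa, xb):
    """Probability the human prefers extract A over B under reward weights w."""
    ra = sum(wi * xi for wi, xi in zip(w, xa))  # reward of extract A
    rb = sum(wi * xi for wi, xi in zip(w, xb))  # reward of extract B
    return 1.0 / (1.0 + math.exp(rb - ra))     # sigmoid(r_a - r_b)

def fit_reward(pairs, dim, lr=0.5, epochs=200):
    """Fit reward weights from (features_a, features_b, a_preferred) comparisons
    by gradient ascent on the Bradley-Terry log-likelihood."""
    w = [0.0] * dim
    for _ in range(epochs):
        for xa, xb, a_pref in pairs:
            p = predict_preference(w, xa, xb)
            err = (1.0 if a_pref else 0.0) - p  # observed minus predicted
            for i in range(dim):
                w[i] += lr * err * (xa[i] - xb[i])
    return w

# Toy data: a simulated human who reliably prefers extracts whose
# first feature is higher (e.g. some hypothetical "melodicity" score).
random.seed(0)
pairs = []
for _ in range(100):
    xa = [random.random(), random.random()]
    xb = [random.random(), random.random()]
    pairs.append((xa, xb, xa[0] > xb[0]))

w = fit_reward(pairs, dim=2)
# The learned reward should now rank a high-first-feature clip above a low one.
print(predict_preference(w, [0.9, 0.5], [0.1, 0.5]))
```

Each comparison here extracts roughly one bit from the human, but because the bit updates a reward model that generalizes across extracts, it buys far more than one bit's worth of generation quality.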
You're pretty much right with the intended usage - with a couple of minor notes.
- You can control-click on a node to quickly load more
- The genre and instruments can be changed whenever you like, but MuseNet will try to transition sensibly, so it might not be immediately obvious
Accidental deletion is a major problem right now, and is high on the to-fix list. Personally, I manage just using the save/load functionality, but it's not a great replacement.
I had hoped to get this more polished before sharing it, but other things came up and it got put to one side for now. Definitely open to pull requests if anyone wants to fix something, there's a long list of issues on the repo!
You're definitely right that it's not a very efficient use of the human's time. Another planned feature is the ability to directly edit or write the MIDI, letting the user say "I liked this option, but this one note was bad".
When writing MuseTree, I was imagining a user who enjoyed music, and wanted to try creating some music of their own, but couldn't just write something outright. With MuseTree, the only skill needed to write good music is the ability to recognize good music.