Thanks, this is wonderful! I think my implementation is very close to the "riffology" described in the paper. Lots of other great ideas in this to pursue!
I wonder if the author has considered connecting it to more sophisticated models. Carykh, for example, trained an RNN to read transcribed MIDI, specifically training it to learn jazz.
Yes, I plan on adding something like this in the future! However, my goal is to do learning in real time in the style of the current pianist. NNs need quite a lot of data to train correctly, so it might be hard to do in real time. Although there could be a neat hybrid (some kind of iterative NN...).
There's probably a reasonably easy way to do style transfer by running a few moments of play by the player and watching which activations light up and putting it through the generative model.
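A hedged sketch of that idea: run a few bars of the player's notes through an RNN and average the hidden states into a "style vector" that could then condition a generative model. Everything here is invented for illustration — the RNN is random and untrained, and the shapes, weights, and names are assumptions, not anything from the original post.

```python
import numpy as np

# Toy sketch: summarize "which activations light up" for a short
# passage of playing. A real system would use a trained network;
# this random RNN only illustrates the plumbing.
rng = np.random.default_rng(0)
H, V = 16, 128                      # hidden size, MIDI note vocabulary
W_xh = rng.normal(0, 0.1, (H, V))   # input-to-hidden weights
W_hh = rng.normal(0, 0.1, (H, H))   # hidden-to-hidden weights

def style_vector(notes):
    """Average RNN hidden state over a sequence of MIDI note numbers."""
    h = np.zeros(H)
    states = []
    for n in notes:
        x = np.zeros(V)
        x[n] = 1.0                  # one-hot encode the note
        h = np.tanh(W_xh @ x + W_hh @ h)
        states.append(h)
    return np.mean(states, axis=0)

if __name__ == "__main__":
    v = style_vector([60, 62, 64, 65, 67])  # a few moments of play
    print(v.shape)
```

The averaged vector is one of the simplest possible summaries; attention pooling or just taking the final hidden state are equally plausible choices.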
The best part of this whole post was the approach the OP took to actually design this: a methodical process of writing the design document, prototyping with Python, and then rewriting the application in Golang to take advantage of its speed. Which raises my question for the software folks in this thread: why is Golang faster than Python? I didn't realize the Golang compiler was available for ARMv7 devices such as the RPi.
Couldn't you use generative adversarial networks? The discriminator AI would learn to identify good melodies from real data. The generator would then try to produce a melody good enough to be approved by the discriminator. You could then play the generator's melody.
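Not a GAN proper, but a toy sketch of that generator-vs-critic loop: here the "discriminator" is replaced by a fixed heuristic (fraction of notes in a minor pentatonic scale) and the "generator" is a random-mutation search that tries to satisfy it. A real GAN would train both sides as neural networks; all names and the scoring rule are invented for this sketch.

```python
import random

PENTATONIC = {0, 3, 5, 7, 10}  # C minor pentatonic pitch classes

def discriminator(melody):
    """Score a melody 0..1 by the fraction of notes in the scale."""
    return sum(1 for n in melody if n % 12 in PENTATONIC) / len(melody)

def generator(steps=500, length=16, seed=0):
    """Mutate a random melody until the critic approves of it."""
    rng = random.Random(seed)
    melody = [rng.randrange(48, 72) for _ in range(length)]
    for _ in range(steps):
        candidate = melody[:]
        candidate[rng.randrange(length)] = rng.randrange(48, 72)
        # Accept only mutations the critic rates at least as highly.
        if discriminator(candidate) >= discriminator(melody):
            melody = candidate
    return melody

if __name__ == "__main__":
    m = generator()
    print(discriminator(m))
```

The interesting part of a real GAN is that the discriminator keeps learning too, so the generator can't just memorize one trick; this fixed critic only shows the shape of the loop.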
Next step is to rig up some mechanism to pull the keys down, and we have a modern-day pianola that can improvise. Fascinating. It shouldn't be too difficult to play blues or other pentatonic melodies, since they lend themselves well to improvisation. Using tabs or music from blues legends such as Stevie Ray or BB King would likely yield some very interesting results.
I find it interesting that we start with teaching AI music the same way we oftentimes start with children, the pentatonic scale, because it's simple and more difficult to play the "wrong" notes.
The author may have been led down a blind alley when he (?) infers from a screenshot that Dan Tepfer is using Processing for this. You can also see that he's running SuperCollider (the icon on the right-hand side of the list of apps), a programming language oriented around procedural music composition and sound synthesis, which is to my mind a much more natural fit for this kind of work.
amazing work and great documentation, OP. thank you.
consider emulations of your piano playing on two interconnected systems, each conceiving of itself as the AI and of the other player as you.
initiate one of the AIs with a masterpiece of yours, unknown to either. would that be considered putting your current state as a piano player into silicon?
http://peterlangston.com/Papers/amc.pdf