I spent a solid month trying to build decent-sounding instruments, and it's really hard, and they didn't sound that great. Then I noticed webaudio-tinysynth, and it saved the day! Its instruments sound surprisingly good. The piano sounds aren't the best, though... a piano is incredibly complicated to synthesize, with sympathetic resonance and all that. I'm personally happy with all the other instruments.
Regardless, if you are doing sound analysis to try to pull out "piano rolls", a.k.a. MIDI data, I'd be interested in talking. It's a very interesting problem and could be very useful.
It uses a microphone to listen to you play and tries to reconstruct the piano roll from the audio using sound analysis, so there is no need for a MIDI instrument, and it can be applied to any instrument.
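Their actual pipeline isn't public, so here is just a minimal sketch of one classic starting point for this kind of analysis: time-domain autocorrelation to estimate the fundamental frequency of a buffer of microphone samples (the kind you'd get from a Web Audio `AnalyserNode`). All names are mine; a real transcription system would add onset detection, note segmentation, polyphony handling, and so on.

```javascript
// Sketch: estimate the fundamental frequency of a mono sample buffer
// by picking the autocorrelation lag with the strongest self-similarity.
function detectPitch(samples, sampleRate) {
  const minFreq = 60, maxFreq = 2000;            // plausible musical range
  const minLag = Math.floor(sampleRate / maxFreq);
  const maxLag = Math.floor(sampleRate / minFreq);
  let bestLag = -1, bestCorr = 0;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let corr = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      corr += samples[i] * samples[i + lag];     // similarity at this lag
    }
    if (corr > bestCorr) { bestCorr = corr; bestLag = lag; }
  }
  return bestLag > 0 ? sampleRate / bestLag : 0; // lag -> frequency in Hz
}

// Usage: a synthetic 440 Hz sine at 44.1 kHz should come out near 440.
const sr = 44100;
const sine = Float32Array.from({ length: 4096 }, (_, i) =>
  Math.sin(2 * Math.PI * 440 * i / sr));
const freq = detectPitch(sine, sr);
```

This only handles a single sustained pitch; going from there to a full piano roll (note on/off times, multiple simultaneous notes) is where the hard part lives.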
It will also be more powerful than a MIDI instrument, as it can analyze higher-level features like tonality, identify pattern structure, or do musical sentiment analysis (so as a player you can objectively know whether the sentiment you are trying to convey is coming across).
It should work on all devices (though there are still some random bugs on Apple iOS mobile devices). For now the piano roll is quite hit-or-miss depending on the microphone, but there is a video with audio that shows what it should look like when it's working.
The tutoring part is not plugged in yet. The app still highlights musical structure and patterns, and displays music theory with colors, which can be a great help for a teacher pinpointing things while explaining a concept.
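I don't know their actual color scheme, but as a toy illustration of "music theory with colors", one common trick is to map the twelve pitch classes around the hue wheel by circle of fifths, so harmonically close notes get similar colors. Everything here is my own invention:

```javascript
// Toy example: color a MIDI note by its pitch class, walking the circle
// of fifths so that C, G, D, ... land on adjacent hues.
function pitchClassColor(midiNote) {
  const pc = ((midiNote % 12) + 12) % 12;  // 0 = C, 1 = C#, ..., 11 = B
  const fifthsIndex = (pc * 7) % 12;       // position on the circle of fifths
  const hue = fifthsIndex * 30;            // 12 steps of 30 degrees each
  return `hsl(${hue}, 80%, 55%)`;          // usable directly as a CSS color
}
```

For example, middle C (MIDI 60) and the G above it (MIDI 67) come out 30 degrees apart on the hue wheel, since G is one step from C on the circle of fifths.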
We are showing that the audio processing pipeline works (though we are still heavily limited by computing power); after that it's just "model fiddling" (which can even be automated) and grunt work like generating datasets that encode the exercises we are trying to teach.
For the tutoring part (not plugged in yet), we will be able to choose among different neural networks trained for specific exercises, like rhythm monitoring or pitch detection (for violin).
You will be able to interact with it: it plays a musical pattern (which you also see), asks you to play it back, then computes the distance between what you played and the target and gives you points. It can also ask you to transpose the pattern.
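As a sketch of what the play-it-back exercise could look like in code (my own simplistic distance metric, not necessarily theirs): transposition is just a constant shift of every MIDI pitch, and a naive score is the fraction of positions played correctly.

```javascript
// Transpose a pattern of MIDI pitches by a number of semitones.
function transpose(pattern, semitones) {
  return pattern.map(n => n + semitones);
}

// Naive score: fraction of exact position-by-position pitch matches,
// with missing or extra notes counting against the player.
function score(target, played) {
  const len = Math.max(target.length, played.length);
  if (len === 0) return 1;
  let hits = 0;
  for (let i = 0; i < Math.min(target.length, played.length); i++) {
    if (target[i] === played[i]) hits++;
  }
  return hits / len;  // 1.0 = perfect
}

const pattern = [60, 62, 64, 65];            // C D E F
const up = transpose(pattern, 2);            // D E F# G
const s = score(pattern, [60, 62, 64, 67]);  // last note wrong
```

A real scorer would also weigh timing and dynamics, not just pitches, but the shape of the loop (play, listen, compare, reward) is the same.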
We can also add time tracking and progress monitoring, try to analyze the mistakes a student makes, and suggest exercises that will help them (as determined by data analysis).
With neural networks, we can also align the performance with a score to check how well the student has played it, or add other instruments to simulate rehearsing a piano-flute duo.
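Score-following is classically done with dynamic time warping (or an HMM) rather than a raw position-by-position comparison, since it tolerates wrong, missed, and inserted notes. A minimal DTW over pitch sequences, as an illustration of the idea (names and cost function are mine):

```javascript
// Minimal dynamic time warping between a score and a performance, both
// given as arrays of MIDI pitches. Lower cost means a closer match.
function dtwCost(score, perf) {
  const n = score.length, m = perf.length;
  const d = Array.from({ length: n + 1 }, () => new Array(m + 1).fill(Infinity));
  d[0][0] = 0;
  for (let i = 1; i <= n; i++) {
    for (let j = 1; j <= m; j++) {
      const cost = Math.abs(score[i - 1] - perf[j - 1]); // pitch distance
      d[i][j] = cost + Math.min(
        d[i - 1][j],      // score note skipped by the player
        d[i][j - 1],      // extra note played
        d[i - 1][j - 1]); // notes matched (or substituted)
    }
  }
  return d[n][m];
}
```

A perfect performance costs 0, and one wrong note costs its pitch distance; the same alignment path also tells you *which* score note each played note corresponds to, which is what the "how well did you play it" feedback needs.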
We can also do duets, like what has been done in https://experiments.withgoogle.com/ai-duet
We can also do similar-music search in a database.
The "notes" are just divs with CSS transforms and transitions... their position gets updated every second to where they are supposed to be two seconds later. It works surprisingly well. But yeah, lots of work. :) I'll be making videos showing how to do all the recording and editing in the coming weeks.
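The trick above can be sketched like this: each tick, compute where the note should be a couple of seconds in the future, set that as the CSS transform target, and let a transition interpolate smoothly so no per-frame JavaScript is needed. The constants and function names here are my own assumptions, not the actual app's:

```javascript
// Sketch of the falling-note animation. The browser tweens between the
// targets we set, so JavaScript only runs once per second.
const PX_PER_SECOND = 120;   // scroll speed (assumed)
const LOOKAHEAD = 2;         // seconds ahead, as described above

// Pixel offset above the "now" line for a note sounding at noteTime,
// given the current clock time `now` (both in seconds).
function noteY(noteTime, now) {
  return (noteTime - now) * PX_PER_SECOND;
}

// In the browser, a 1-second interval would apply this per note div:
// div.style.transition = `transform ${LOOKAHEAD}s linear`;
// div.style.transform  = `translateY(${-noteY(t, now + LOOKAHEAD)}px)`;
```

The nice property is that the layout math stays in one pure function, and the compositor does the actual animation off the main thread.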