
Show HN: Karaoke for Piano Overlaid and Synched with YouTube Videos - robbrown451
https://pianop.ly/
======
cseebach
I'm a terrible piano player, but this is still lots of fun. Particularly
notable for me here is the use of
[https://github.com/g200kg/webaudio-tinysynth](https://github.com/g200kg/webaudio-tinysynth)
to generate the tones - the WebAudio latency is now good enough in browsers
for stuff like this to work!
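
For anyone curious, tinysynth is driven with raw MIDI messages via its
`send()` method; here's a minimal sketch (the note-on/note-off bytes follow
the MIDI spec, and the frequency mapping is just standard equal temperament -
the 500 ms note length is an arbitrary example value):

```javascript
// MIDI note number -> frequency in Hz (A4 = note 69 = 440 Hz),
// the standard equal-temperament mapping such synths implement.
function midiToFrequency(note) {
  return 440 * Math.pow(2, (note - 69) / 12);
}

// In the browser, after loading webaudio-tinysynth:
if (typeof WebAudioTinySynth !== 'undefined') {
  const synth = new WebAudioTinySynth();
  synth.send([0x90, 60, 100]);                      // note on: middle C, velocity 100
  setTimeout(() => synth.send([0x80, 60, 0]), 500); // note off after 500 ms
}
```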

~~~
robbrown451
Thanks. Yeah, I'm not so good at piano either, but I'm getting better.

I spent a solid month trying to build decent-sounding instruments, and it's
really hard - they didn't sound that great. Then I noticed webaudio-tinysynth,
and it saved the day! The instruments sound surprisingly good. Although the
actual piano sounds aren't the best....piano is incredibly complicated to
synthesize...sympathetic resonance and all that. I'm personally happy with all
the other ones though.

------
GistNoesis
Hi, I just did a Show HN yesterday about pianorolls using tensorflow.js that
might interest you:
[https://news.ycombinator.com/item?id=19128287](https://news.ycombinator.com/item?id=19128287)

~~~
robbrown451
Interesting. I have to admit I'm not sure how to actually use it. Like how is
it tutoring you?

Regardless, if you are doing sound analysis to try to pull out "pianorolls"
a.k.a. MIDI data, I'd be interested in talking. It's a very interesting
problem and could be very useful.

~~~
GistNoesis
Thanks for your interest.

It uses a microphone to listen to you playing, and tries to reconstruct the
piano-rolls from the audio using some sound analysis, so that there is no
need to have a MIDI instrument, and it can be applied to all instruments.
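
A toy version of that audio-to-note step can be sketched with naive
autocorrelation (real piano-roll extraction is polyphonic and far harder;
the sample rate, window size, and lag bounds here are illustrative):

```javascript
// Naive monophonic pitch detection by autocorrelation: find the lag at
// which the signal best matches a shifted copy of itself.
function detectPitch(samples, sampleRate) {
  const minLag = 40;                                  // ~1100 Hz upper bound
  const maxLag = Math.min(1000, samples.length - 1);  // ~44 Hz lower bound
  let bestLag = minLag;
  let bestScore = -Infinity;
  for (let lag = minLag; lag <= maxLag; lag++) {
    let score = 0;
    for (let i = 0; i + lag < samples.length; i++) {
      score += samples[i] * samples[i + lag];
    }
    if (score > bestScore) {
      bestScore = score;
      bestLag = lag;
    }
  }
  return sampleRate / bestLag; // estimated fundamental in Hz
}

// Synthetic A4 (440 Hz) test signal:
const sampleRate = 44100;
const signal = Array.from({ length: 2048 }, (_, i) =>
  Math.sin(2 * Math.PI * 440 * i / sampleRate));
const freq = detectPitch(signal, sampleRate);
```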

It'll also be more powerful than a MIDI instrument, as it can analyze
higher-level features like tonality, identify higher-level pattern structure,
or do musical sentiment analysis (so as a player you can objectively know
whether the sentiment you are trying to convey is transmitted successfully).

It should work on all devices (though there are still some random bugs on
Apple iOS mobile devices). The piano-roll is for the moment quite dependent on
luck with the microphone, but there is a video with audio to show what it
should look like when it's working.

The tutoring part is not plugged in yet. It still highlights the musical
structure and patterns. It displays music theory with colors, and can be a
great help for a teacher to pinpoint things during an explanation of a concept.
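
One common way to display music theory with colors is to map pitch classes
onto the color wheel; a sketch (the 30°-per-semitone hue step is an arbitrary
illustrative choice, not necessarily what this project does):

```javascript
// Map a MIDI note to an HSL color by pitch class (C, C#, ..., B).
const PITCH_CLASSES = ['C','C#','D','D#','E','F','F#','G','G#','A','A#','B'];

function noteColor(midiNote) {
  const pc = midiNote % 12;  // pitch class, 0 = C
  const hue = pc * 30;       // spread 12 classes evenly over 360°
  return { name: PITCH_CLASSES[pc], css: `hsl(${hue}, 80%, 55%)` };
}
```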

We are showing that the audio processing pipeline works (though we are still
heavily limited by computing power); then it's just "model fiddling" (which
can even be automated) and grunt work like generating datasets to encode the
exercises we are trying to help people learn.

For the tutoring part (not plugged in yet), we will be able to choose
different neural networks which we can train for specific exercises, like
rhythm monitoring or pitch detection (for violin).

You will be able to interact with it: it plays a musical pattern (which you
also see), asks you to play it again, and then compares the distance and
gives you points. It can also ask you to transpose it.
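
That compare-and-score step could be as simple as an edit distance between
the target and played note sequences - a sketch under that assumption (real
scoring would also weigh rhythm and timing):

```javascript
// Levenshtein distance between two sequences of MIDI note numbers:
// the number of insertions, deletions, and substitutions needed to
// turn the played sequence into the target.
function editDistance(target, played) {
  const m = target.length, n = played.length;
  const d = Array.from({ length: m + 1 }, (_, i) =>
    Array.from({ length: n + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const cost = target[i - 1] === played[j - 1] ? 0 : 1;
      d[i][j] = Math.min(d[i - 1][j] + 1,       // deletion
                         d[i][j - 1] + 1,       // insertion
                         d[i - 1][j - 1] + cost); // substitution
    }
  }
  return d[m][n];
}

// Score: full points minus one per mistake, floored at zero.
function score(target, played) {
  return Math.max(0, target.length - editDistance(target, played));
}
```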

We can also add time tracking and progress monitoring. We can try to analyze
the mistakes a student makes and suggest exercises that will help them (as
determined by data analysis).

With neural networks, we can also do alignment with a score to check how well
the student has played it. Or add some other instruments to simulate
rehearsing a piano-flute duo.
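
Score alignment is typically done with something like dynamic time warping;
a minimal sketch over plain note-number sequences (real systems align audio
features rather than symbolic notes):

```javascript
// Dynamic time warping distance between a reference score and a performance,
// both given as sequences of MIDI note numbers. A performance that repeats
// or stretches notes can still align with zero cost.
function dtw(reference, performance) {
  const m = reference.length, n = performance.length;
  const INF = Number.POSITIVE_INFINITY;
  const d = Array.from({ length: m + 1 }, () => new Array(n + 1).fill(INF));
  d[0][0] = 0;
  for (let i = 1; i <= m; i++) {
    for (let j = 1; j <= n; j++) {
      const cost = Math.abs(reference[i - 1] - performance[j - 1]);
      d[i][j] = cost + Math.min(d[i - 1][j],      // skip a reference note
                                d[i][j - 1],      // skip a performed note
                                d[i - 1][j - 1]); // match
    }
  }
  return d[m][n];
}
```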

We can also do duets, like what has been done in
[https://experiments.withgoogle.com/ai-duet](https://experiments.withgoogle.com/ai-duet)

We can also do similar-music search in a database.

------
pilothouse
Awesome job...pretty amazing when you consider what's involved in the graphic
overlays and timing synchronization!

~~~
robbrown451
Thanks! Yeah, it took a good bit of work just to get a proof of concept
working. YouTube's API doesn't report very accurate time, so I did a bit of
smoothing. (You might notice the notes take a second or so to "lock in".)
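
One way to do that kind of smoothing (a sketch, not necessarily what
pianop.ly does): advance an internal clock with the wall clock, and only
nudge it toward each reported player time, so jittery reports don't make the
notes jump:

```javascript
// Smooth noisy player-time reports: extrapolate from the local clock,
// then blend in a fraction of the reported error each update.
class TimeSmoother {
  constructor(alpha = 0.1) {
    this.alpha = alpha;    // how strongly to trust each new report
    this.estimate = null;  // smoothed player time, in seconds
    this.lastNow = null;   // local clock at the previous update
  }
  update(reportedTime, now) {
    if (this.estimate === null) {
      this.estimate = reportedTime;                      // first report wins
    } else {
      this.estimate += now - this.lastNow;               // extrapolate
      this.estimate += this.alpha * (reportedTime - this.estimate); // correct
    }
    this.lastNow = now;
    return this.estimate;
  }
}
```

The "lock in" delay mentioned above falls out naturally: the first few noisy
reports are only partially trusted, so the estimate converges over a second
or so.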

The "notes" are just divs with CSS transforms and transitions....their
position gets updated every second to where they are supposed to be two
seconds later. Works surprisingly well. But yeah, lots of work. :) I'll be
making videos to show how to do all the recording and editing and stuff in
the coming weeks.
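
That update scheme boils down to a tiny calculation per note - a sketch with
made-up numbers (pxPerSecond and the 2 s lookahead are illustrative; the
transition duration matches the lookahead so motion stays continuous between
updates):

```javascript
// Compute where a "note" div should sit LOOKAHEAD seconds from now, in
// pixels relative to the keyboard line (0 = the moment the note is hit).
function noteTargetOffset(noteTime, playerTime, pxPerSecond, lookahead = 2) {
  // Seconds until the note is struck, as seen LOOKAHEAD seconds in the future:
  const secondsOut = noteTime - (playerTime + lookahead);
  return secondsOut * pxPerSecond;
}

// Applying it to a div via CSS transform (browser only):
function positionNote(el, noteTime, playerTime, pxPerSecond) {
  const y = noteTargetOffset(noteTime, playerTime, pxPerSecond);
  el.style.transition = 'transform 2s linear';
  el.style.transform = `translateY(${-y}px)`; // above the keyboard until due
}
```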

