Composing Music with LSTM Recurrent Networks – Blues Improvisation (idsia.ch)
41 points by helical on June 24, 2015 | 12 comments



Author (http://research.google.com/pubs/author39086.html) of the paper here. I'm amused this is on Hacker News. The goal was to learn very long-timescale limit cycle behavior in a recurrent neural network: the chord changes are separated by many intervening melodic events (notes). As it turns out, even LSTM is pretty fragile when it comes to this. One problem is stability: if the network gets perturbed too far, it can move into a part of the state space from which it never recovers. I'm not all that proud of the specific improvisations from that network, but I did enjoy learning what's possible and impossible in the space. I think now, with new ways to train larger networks on more data, it's time to revisit this challenge.
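
To make the setup concrete, here is a minimal sketch in modern terms (PyTorch rather than the original code; the vocabulary size, architecture, and sampling details are all my assumptions, not the paper's):

    import torch
    import torch.nn as nn

    VOCAB = 64  # assumed token count: melody notes plus chord symbols

    class MelodyLSTM(nn.Module):
        def __init__(self, hidden=128):
            super().__init__()
            self.embed = nn.Embedding(VOCAB, 32)
            self.lstm = nn.LSTM(32, hidden, batch_first=True)
            self.head = nn.Linear(hidden, VOCAB)

        def forward(self, tokens, state=None):
            h, state = self.lstm(self.embed(tokens), state)
            return self.head(h), state

    def improvise(model, seed, steps=96):
        # warm up the hidden state on a seed phrase, then sample
        logits, state = model(seed)
        tok = logits[:, -1].softmax(-1).multinomial(1)
        out = [tok.item()]
        for _ in range(steps - 1):
            # the stability problem in miniature: one unlucky sample
            # can push (h, c) into a region the training data never
            # visited, and the learned cycle over the 12-bar form
            # never recovers
            logits, state = model(tok, state)
            tok = logits[:, -1].softmax(-1).multinomial(1)
            out.append(tok.item())
        return out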

Edit: Formatting. I clearly don't post much on HN.


Hi Douglas, I just finished college and am quite interested in RNNs, fascinated by their capability and potential. Should I go to graduate school to study them, or can I play with them as a hobby? Do you have any suggestions?


I think you could play around as a hobby. You might try Theano as a place to start (for LSTM: http://deeplearning.net/tutorial/lstm.html). If you become passionate about neural networks, you might find yourself in grad school simply because that's a great place for diving in more deeply. It's really, really helpful to know machine learning; Andrew Ng's Coursera course is a great place to start: https://www.coursera.org/course/ml
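
If you want a concrete first exercise before diving into the tutorial, something like this toy next-token trainer is enough to watch an LSTM memorize a pattern (sketched in PyTorch purely for brevity; the Theano tutorial walks through the same idea, and everything here is made up for illustration):

    import torch
    import torch.nn as nn

    pattern = torch.tensor([[0, 1, 2, 3] * 16])   # toy "melody", shape (1, 64)
    emb = nn.Embedding(4, 8)
    lstm = nn.LSTM(8, 32, batch_first=True)
    head = nn.Linear(32, 4)
    params = (list(emb.parameters()) + list(lstm.parameters())
              + list(head.parameters()))
    opt = torch.optim.Adam(params, lr=1e-2)

    for step in range(200):
        x, y = pattern[:, :-1], pattern[:, 1:]    # predict the next token
        h, _ = lstm(emb(x))
        loss = nn.functional.cross_entropy(head(h).transpose(1, 2), y)
        opt.zero_grad()
        loss.backward()
        opt.step()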


Really fascinating. I wonder if distinguishing between a motif and a random set of notes would help provide structure here. So the model would decide "I'm going to build a motif and save it for variation later" for 4 bars, then could decide to play randomness in the turnaround. On the next pass through, it applies variation to the pre-established motif?
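
In toy form (plain Python, no network; the scale, probabilities, and structure are all made up for illustration):

    import random

    SCALE = [0, 3, 5, 6, 7, 10]  # blues scale degrees, in semitones

    def make_motif(bars=4, notes_per_bar=4):
        # "build a motif and save it for variation later"
        return [random.choice(SCALE) for _ in range(bars * notes_per_bar)]

    def vary(motif, p=0.25):
        # re-sample each note with probability p, keep the rest
        return [random.choice(SCALE) if random.random() < p else n
                for n in motif]

    motif = make_motif()
    # state the motif, vary it, play free in the turnaround, restate
    chorus = motif + vary(motif) + make_motif(bars=2) + vary(motif)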


I think there's something right about that idea. It seems, to me at least, that this idea of storing and re-using motifs with variation is at the heart of improvisation. (Author of paper).


This is from 2002, right?


It is indeed. I'd suggest the title be modified to add the year, as it provides context for how long LSTMs had been established before their recent popularity boom.

https://scholar.google.com/scholar?hl=en&q=A+First+Look+at+M...


LSTMs came from the same group (Schmidhuber's); they were introduced by Hochreiter and Schmidhuber in 1997.

It's somewhat interesting that only recently have they become really widely used in certain deep learning communities, e.g. speech recognition.


Unfortunately, I can no longer modify the title.


Interesting, but it feels like the music is forever stuck in an intro of some kind. I never quite get the feeling that it's building towards something.


Think that may be by design:

> The goal of these experiments was to see if LSTM could learn a fixed chord structure while in parallel learning elements of a varying melody structure. It was easier to stick with a basic melody. Note that every 12-bar segment is unique; however, because only one or two bars are changed at a time, you may have to listen for a while to hear differences. We are currently working on a much more interesting set of training melodies and chords.
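
For what it's worth, here is a hedged sketch of one plausible way to encode that kind of training data (the paper's exact input representation may differ): at each time step, concatenate a one-hot melody note with a multi-hot set of chord tones.

    import numpy as np

    N_MELODY = 13   # assumed melody pitch range
    N_CHORD = 12    # pitch classes available to the chord

    def encode_step(melody_note, chord_tones):
        v = np.zeros(N_MELODY + N_CHORD, dtype=np.float32)
        v[melody_note] = 1.0          # one-hot melody
        for pc in chord_tones:
            v[N_MELODY + pc] = 1.0    # multi-hot chord
        return v

    # e.g. a C7 chord (C, E, G, Bb) under melody note index 5
    x = encode_step(5, [0, 4, 7, 10])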


"We are currently working on a much more interesting set of training melodies and chords". More like "my postdoc ended in Switzerland and I started a faculty job at University of Montreal (LISA lab) and never had time to get back to LSTM and music composition. Sigh.



