

Composing Music with LSTM Recurrent Networks – Blues Improvisation - helical
http://people.idsia.ch/~juergen/blues/

======
douglaseck
Author
([http://research.google.com/pubs/author39086.html](http://research.google.com/pubs/author39086.html))
of the paper here. I'm amused this is on Hacker News. The goal was to learn
very long-timescale limit cycle behavior in a recurrent neural network. The
chord changes are separated by many intervening melodic events (notes). As it
turns out, even LSTM is pretty fragile when it comes to this. One problem is
stability: if the network gets too perturbed, it can move into a space from
which it never recovers. I'm not all that proud of the specific improvisations
from that network, but I did enjoy learning what's possible and impossible in
the space. I think now, with new ways to train larger networks on more data,
it's time to revisit this challenge.
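
To make the long-timescale problem concrete, here's a minimal toy sketch (PyTorch, not the original code or data): the LSTM has to track a slow "chord" bit that flips every 12 steps while only seeing fast, random "note" inputs, so it must carry phase information across many intervening events.

```python
# Toy illustration (hypothetical, not the paper's setup): an LSTM must
# track a slow "chord" signal that changes only every 12 steps, while a
# fast random "note" arrives at every step.
import torch
import torch.nn as nn

torch.manual_seed(0)

SEQ_LEN, BAR = 96, 12            # 8 "bars" of 12 steps each
NOTE_DIM, HIDDEN = 8, 32

def make_batch(batch_size=16):
    # Fast inputs: random one-hot "notes" at every timestep.
    notes = torch.eye(NOTE_DIM)[torch.randint(NOTE_DIM, (batch_size, SEQ_LEN))]
    # Slow target: a "chord" bit that flips every 12 steps. The inputs
    # carry no chord cue, so the network must count steps internally.
    chord = (torch.arange(SEQ_LEN) // BAR % 2).float()
    chord = chord.expand(batch_size, SEQ_LEN).unsqueeze(-1)
    return notes, chord

class ChordTracker(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(NOTE_DIM, HIDDEN, batch_first=True)
        self.head = nn.Linear(HIDDEN, 1)

    def forward(self, x):
        out, _ = self.lstm(x)
        return self.head(out)

model = ChordTracker()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.BCEWithLogitsLoss()

for step in range(200):
    notes, chord = make_batch()
    loss = loss_fn(model(notes), chord)
    opt.zero_grad()
    loss.backward()
    opt.step()
    if step % 50 == 0:
        print(f"step {step}: loss {loss.item():.4f}")
```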

Edit: Formatting. I clearly don't post much on HN.

~~~
cwhy
Hi Douglas, I just finished college and am quite interested in RNNs,
fascinated by their capability and potential. Should I go to graduate school
to study them, or can I play with them as a hobby? Do you have any suggestions?

~~~
douglaseck
I think you could play around as a hobby. You might try Theano as a place to
start (for LSTM:
[http://deeplearning.net/tutorial/lstm.html](http://deeplearning.net/tutorial/lstm.html)).
If you become passionate about neural networks you might find yourself in grad
school simply because that's a great place for diving in more deeply. It's
really, really helpful to know machine learning. Andrew Ng's Coursera course
is a great place to start:
[https://www.coursera.org/course/ml](https://www.coursera.org/course/ml)
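
If you want a feel for what's inside the box before diving into Theano, the LSTM step itself is just a handful of gate equations. Here's a plain-NumPy sketch (illustrative random weights, not the tutorial's code):

```python
# A single LSTM step in plain NumPy -- the same gate equations the
# Theano tutorial above walks through (sketch; weights are hypothetical).
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, U, b):
    # W: (4*hidden, n_in), U: (4*hidden, hidden), b: (4*hidden,)
    z = W @ x + U @ h_prev + b
    i, f, o, g = np.split(z, 4)
    i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)  # input, forget, output gates
    g = np.tanh(g)                                # candidate cell update
    c = f * c_prev + i * g                        # gated cell-state update
    h = o * np.tanh(c)                            # new hidden state
    return h, c

# Run a few steps with random weights just to see it tick over.
rng = np.random.default_rng(0)
n_in, n_hid = 4, 8
W = 0.1 * rng.standard_normal((4 * n_hid, n_in))
U = 0.1 * rng.standard_normal((4 * n_hid, n_hid))
b = np.zeros(4 * n_hid)
h = c = np.zeros(n_hid)
for _ in range(5):
    h, c = lstm_step(rng.standard_normal(n_in), h, c, W, U, b)
print(h)
```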

------
abannin
Really fascinating. I wonder if distinguishing between a motif and a random
set of notes would help provide structure here. So, the model would decide
"I'm going to build a motif and save it for variation later" for 4 bars, then
could decide to play randomness in the turnaround. On the next pass through,
it would apply variation to the pre-established motif? A toy sketch of the
idea is below.
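
```python
# Toy sketch of the store-and-vary idea (hypothetical, not from the
# paper): generate a motif, save it, then replay it later with small
# random variations.
import random

SCALE = [0, 3, 5, 6, 7, 10]          # blues scale degrees (semitones)

def new_motif(length=16):
    # "I'm going to build a motif and save it for variation later."
    return [random.choice(SCALE) for _ in range(length)]

def vary(motif, n_changes=2):
    # Replay the stored motif, perturbing only a couple of notes.
    varied = list(motif)
    for i in random.sample(range(len(varied)), n_changes):
        varied[i] = random.choice(SCALE)
    return varied

motif = new_motif()
print("motif:    ", motif)
print("variation:", vary(motif))
```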

~~~
douglaseck
I think there's something right about that idea. It seems, to me at least,
that this idea of storing and re-using motifs with variation is at the heart
of improvisation. (Author of the paper.)

------
albertzeyer
This is from 2002, right?

~~~
Smerity
It is indeed - I'd suggest the title be modified to add the year, as it
provides context for how long LSTMs had been established before their recent
popularity boom.

[https://scholar.google.com/scholar?hl=en&q=A+First+Look+at+M...](https://scholar.google.com/scholar?hl=en&q=A+First+Look+at+Music+Composition+using+LSTM+Recurrent+Neural+Networks&btnG=&as_sdt=1%2C5&as_sdtp=)

~~~
albertzeyer
LSTMs came from the same group (Schmidhuber's), from Hochreiter. They were
introduced in 1997.

It's somewhat interesting that only recently have they become really widely
used in certain deep learning communities, e.g. speech recognition.

------
SCHiM
Interesting, but it feels like the music is forever stuck in an intro of some
kind. I never quite get the feeling that it's building towards something.

~~~
gwern
Think that may be by design:

> The goal of these experiments was to see if LSTM could learn a fixed chord
> structure while in parallel learning elements of a varying melody structure.
> It was easier to stick with a basic melody. Note that every 12-bar segment
> is unique; however, because only one or two bars are changed at a time, you
> may have to listen for a while to hear differences. We are currently working
> on a much more interesting set of training melodies and chords
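
A hypothetical sketch of training data in that spirit (not the paper's actual dataset): a fixed 12-bar blues chord pattern, with a melody where only one or two bars change between consecutive 12-bar segments:

```python
# Sketch (assumed, not the paper's code): fixed 12-bar chord structure,
# melody varied one or two bars at a time, so consecutive segments are
# unique but differ only subtly.
import random

CHORDS = ["I", "I", "I", "I", "IV", "IV", "I", "I", "V", "IV", "I", "I"]
SCALE = [0, 3, 5, 6, 7, 10]           # blues scale degrees

def random_bar(notes_per_bar=4):
    return [random.choice(SCALE) for _ in range(notes_per_bar)]

melody = [random_bar() for _ in range(12)]
for segment in range(3):
    # Keep the chords fixed; change only one or two bars of melody.
    for bar in random.sample(range(12), random.randint(1, 2)):
        melody[bar] = random_bar()
    print(f"segment {segment}: chords={CHORDS}")
    print(f"           melody={melody}")
```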

~~~
douglaseck
"We are currently working on a much more interesting set of training melodies
and chords". More like "my postdoc ended in Switzerland and I started a
faculty job at University of Montreal (LISA lab) and never had time to get
back to LSTM and music composition. _Sigh_.

