

Making music automatically with markov chains and genetic algorithms - vikp
http://vikparuchuri.com/blog/making-instrumental-music-from-scratch/

======
Kronopath
Steve Engels at the University of Toronto has done some work on exactly this.
You can read a bit about the work and listen to some samples here:
[http://www.magazine.utoronto.ca/leading-edge/computer-
music-...](http://www.magazine.utoronto.ca/leading-edge/computer-music-
composition-steve-engels-daniel-eisner/)

His system used similar techniques: a note-by-note Markov chain over MIDI data,
trained to generate music resembling an initial piece of training data. The
difference with his model is that it's trained on only a single piece at a
time. This leads to significantly more coherent music, but at the cost of
making the output effectively a variation on the original piece.
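For anyone curious what "note-by-note Markov chain" means concretely, here's a
minimal sketch (not Engels's actual implementation): notes are represented as
MIDI pitch numbers, a first-order transition table is built from one training
piece, and a new melody is sampled by random walk.

```python
import random
from collections import defaultdict

def train_markov(notes):
    """Build a first-order transition table: note -> list of observed next notes."""
    transitions = defaultdict(list)
    for current, nxt in zip(notes, notes[1:]):
        transitions[current].append(nxt)
    return transitions

def generate(transitions, start, length, seed=None):
    """Random-walk the transition table to produce a new note sequence."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        choices = transitions.get(out[-1])
        if not choices:  # dead end: this note was never followed by anything
            break
        out.append(rng.choice(choices))
    return out

# Toy training data: a single "piece" as MIDI pitch numbers (a C-major run).
piece = [60, 62, 64, 65, 67, 65, 64, 62, 60, 62, 64, 62, 60]
table = train_markov(piece)
melody = generate(table, start=60, length=16)
```

Because duplicate entries in the transition lists preserve observed
frequencies, common note-to-note movements in the training piece are sampled
proportionally more often. A higher-order chain (conditioning on the last
two or three notes) trades variety for more of the original's local phrasing.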

The biggest challenge in this kind of work is trying to get an overall
structure for the entire song. In talks at the university, Engels has
described the output of his model as that of a "distracted jazz pianist"—the
moment-to-moment melodies are coherent but the song lacks overall form and
direction.

~~~
vikp
Thanks for the link, interesting stuff.

It may actually be easier to model "coherence" by applying natural language
processing techniques to the MIDI data before rendering. It is very hard to
get coherence features that work across an entire piece. Definitely worth
exploring (and I need to learn more music theory).

