That the result is largely consonant appears to be primarily due to the structure of the format: if you don't know that you can create accidentals off a normal scale, play chords, set up multiple competing melodies, etc., then you can't use them to create dissonance. This means a purely random string largely amounts to hitting random white keys on a piano, which never really sounds grating (although it doesn't sound good either, just bland)... which also happens to describe the samples that have been selected for us!
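To put it concretely (a toy sketch of my own, not from the post): if you sample uniformly from the white keys you only ever get the C major scale, so every interval is diatonic and nothing can clash chromatically.

```python
import random

# Toy sketch: a "melody" sampled uniformly from the white keys.
# With no accidentals, chords, or second voice available, there is
# simply no machinery for producing real dissonance -- the result
# is bland diatonic noodling, never grating.
WHITE_KEYS = ["C", "D", "E", "F", "G", "A", "B"]

def random_white_key_melody(length=16, seed=None):
    rng = random.Random(seed)
    return [rng.choice(WHITE_KEYS) for _ in range(length)]

print(" ".join(random_white_key_melody(16, seed=1)))
```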
Equivalences all the way down!
I think the main benefits of the ABC syntax are that it is already a very dense encoding and that there is an established library of music to train on.
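For anyone who hasn't seen it, ABC really is compact: a complete tune is a few header lines plus roughly one character per note. A made-up example (not from the training set):

```
X:1
T:Example Jig
M:6/8
L:1/8
K:D
|:A2B d2e|f2e d2B|A2B d2f|e3 e2:|
```

Nearly every character carries musical meaning (pitch, duration, bar line, repeat), which is exactly the kind of dense sequential text a char-rnn can ingest directly.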
The source code is available here: https://github.com/jcraane/melodycomposition_genetic
Some sample melodies are in the docs/samples folder.
It may take some time to get it working again, but it should not be that hard.
The best part is Valve actually implemented one of them: http://i.imgur.com/Ydim1ui.png
So the training set is just text files containing songs? How does it test whether the output is correct or not? If I understand correctly, the goal here was just to produce outputs in the correct format. If one wanted to train for quality as well, would one need to grade every output the network produces by hand?
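Right, there is no "correct output" check at all: a char-rnn is trained purely to predict the next character of the training text, minimising cross-entropy. Musical quality never enters the loss, which is why training for quality would indeed need a separate (likely human) grading signal. A toy sketch of that training objective, with a simple bigram count model standing in for the RNN:

```python
import math
from collections import Counter, defaultdict

# Toy illustration of the char-rnn training signal: the model is
# never graded on musical quality, only on how well it predicts the
# next character of the corpus.
corpus = "X:1\nK:C\nCDEF GABc|cBAG FEDC|\n"

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def next_char_prob(prev, nxt):
    total = sum(counts[prev].values())
    return counts[prev][nxt] / total if total else 0.0

# Average negative log-likelihood (cross-entropy) over the corpus --
# the same quantity an RNN minimises, just with a far richer model.
nll = -sum(math.log(next_char_prob(p, n))
           for p, n in zip(corpus, corpus[1:])) / (len(corpus) - 1)
print(f"avg next-char NLL: {nll:.3f}")
```

Lower NLL just means "looks more like the training text, one character at a time"; nothing in that number knows what a melody is.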
Funny, this is DeepDrumpf bad-mouthing Andrej Karpathy, the inventor of this algorithm (char-rnn):
So I don't think we should call him the inventor, though he definitely popularised it with his great writing and examples.
 Sutskever, Ilya, James Martens, and Geoffrey E. Hinton. "Generating text with recurrent neural networks." Proceedings of the 28th International Conference on Machine Learning (ICML-11). 2011.
Here's all his tweets: http://greptweet.com/u/dril/dril.txt
These things generate tunes which sound OK for a few seconds, but after tens of seconds, you realize there's no higher level structure at all. It's just random.
 - https://magenta.tensorflow.org/2016/07/15/lookback-rnn-atten...
 - https://www.youtube.com/watch?v=qFBQDfPyjoE
The bass line was generally quite simplistic; I wonder what would happen if you codified Gradus ad Parnassum and taught the RNN counterpoint.
In fact this kind of toy ML can never produce competent music. The best it can do is produce short workable snatches that sound like cut and paste snippets of the training source - before losing the plot in the next bar or two.
Training an RNN on a huge set of tunes and expecting it to produce examples of equivalent musicality is a fundamentally unworkable idea.
I'd suggest that anyone who doesn't see why this must be true doesn't understand ML well enough to know when it can and can't be used effectively.
It's worth asking in what other domains trivial RNNs are being misapplied to produce trivially poor models.
It's one thing to make bad music. It's another to - say - run a trading strategy, or make marketing decisions based on oversimplified ML models that produce misleading results because they're not sophisticated enough to recognise all the critical structures in the data set.
IMSLP (<http://imslp.org/wiki/Main_Page>) has about 110,000 works uploaded. However, these are mostly PDFs (some of them are scans), which would be very hard to extract useful data from.
Such data may include multiple voices, which makes it harder for a neural network to learn a pleasant-sounding song.