Secondly, it's interesting how they used the tech to stretch their existing material: they notated their existing corpus of work (82 songs) into MIDI, chopped it by part into loops, fed those into the MusicVAE deep learning model, got thousands of generated loops back out, and reintegrated that output into a full album. This is probably the way things can and should go.
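For concreteness, the "chop it by part into loops" step is just fixed-length windowing over each instrument's note events. A purely illustrative sketch (function name and the (start_beat, pitch) note representation are my own, not YACHT's actual tooling; a real pipeline would read MIDI via a library like pretty_midi before handing loops to MusicVAE):

```python
def chop_into_loops(notes, bars_per_loop=2, beats_per_bar=4):
    """Split one instrument part into fixed-length loops.

    notes: list of (start_beat, pitch) tuples for a single part.
    Returns a list of loops, each a list of notes with start times
    rebased to the beginning of that loop.
    """
    loop_len = bars_per_loop * beats_per_bar
    loops = {}
    for start, pitch in notes:
        idx = int(start // loop_len)  # which loop this note falls in
        # rebase the note's start time relative to its loop
        loops.setdefault(idx, []).append((start - idx * loop_len, pitch))
    return [loops[i] for i in sorted(loops)]
```

So a part with notes at beats 0, 7 and 9 splits into a first 2-bar loop holding the first two notes and a second loop whose note is rebased to beat 1. Each such loop is then a candidate training/seed sequence for the model.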
Exciting times! I want to have a go with this tech with my own music.
YACHT is fantastic and explores a lot of fascinating ideas both lyrically and musically (try "I Thought the Future Would Be Cooler" for a catchy dance tune with a depressingly great dystopian theme; never has our impending doom sounded so good. For the less cynical, try "Shangri-La").
- "Generative art and UK copyright law - good news", http://mcld.co.uk/blog/2011/generative-art-and-uk-copyright-...
- "Template License and Collaboration Agreements for AI Art" https://clinic.cyber.harvard.edu/2019/02/04/template-license...
If a melody generator produces a tune that's exactly like $famous_tune, is that plagiarism? What if it produces $famous_tune among thousands of other tunes? Does it make a difference if $famous_tune was in the training set, but there isn't any evidence of over-fitting?
What if you argue that the generative space includes $famous_tune, and therefore in some sense all output is influenced by it, making all output a related or derivative work - even if the space is astronomically huge?
And so on. I suspect this is going to keep lawyers very busy.