While the project seemed interesting to work on, and most people would call the music beautiful, to me it doesn't really amount to what it's purported to represent.
Another (to me) important point: a lot of the compositions, with some notable exceptions, don't stray far from the pentatonic scale – or the individual elements are pentatonic in relation to themselves. The pentatonic scale is five notes, each of which, to western audiences, sounds relatively pleasing in relation to the other four. You could play something literally completely random in a pentatonic scale (perhaps with certain broad rhythmic restrictions), and western audiences would enjoy it.
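To make that concrete, here's a toy sketch (my own illustration, not anything from the project in the article): pick notes uniformly at random from a C-major pentatonic scale, with only a coarse rhythmic restriction.

```python
import random

# C-major pentatonic: C, D, E, G, A (as MIDI note numbers, one octave)
PENTATONIC = [60, 62, 64, 67, 69]

def random_pentatonic_phrase(length=8, seed=None):
    """Pick `length` notes uniformly at random from the scale,
    with a broad rhythmic restriction: each note lasts 1 or 2 beats."""
    rng = random.Random(seed)
    return [(rng.choice(PENTATONIC), rng.choice([1, 2])) for _ in range(length)]

phrase = random_pentatonic_phrase(seed=42)
print(phrase)
```

Because the scale omits the semitone clashes of the full diatonic scale, any such random sequence tends to sound inoffensive, which is exactly the point above.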
If anything, I think this is just evidence that the process was actually carefully shaped at every step of the way by human tastes and intuition. The computer was truly more of a servant than a collaborator.
A good way to find new ideas for music that people will care about listening to is to go back to the pentatonic system, and find new variations to take from there.
Building representational models of that process is hard. So most people don't bother - they mush stuff up at random and keep the bits they like.
Unfortunately this music prof is ignoring 60-odd years of computer music history and reinventing the glitch-with-Max thing that was big 10-15 years ago.
Autechre, Amon Tobin, and many many others have been making music like this for a while now. The sound/sample world is different, but the techniques are very similar.
Lejaren Hiller showed that computers can compose competent music in 1957 with the Illiac Suite. Since then there have been countless others using computers for things like the project in this article (Autechre is a PERFECT example).
I love that people are doing work like this and sharing it (full disclosure, this is a hobby of mine), but I wish it were presented in a less pretentious way (although admittedly this comment could probably also be a lot less pretentious).
Why no, American-born electronic artist Brian Transeau "developed" the technique as recently as 2011.
If this guy created anything, it's the plugin with the same name that automates the process (and even that is debatable for 2011 -- similar plugin tools existed way before).
In essence: don't trust anything you read on the internet. This Wikipedia page is an example of the worst BS on Wikipedia; it should be taken down.
Now I wonder how that Wikipedia article survives...
tldr: artificial intelligence is much more artificial than intelligent.
nearly every project of this nature illustrates how much easier it is to write narrow AI than general AI, even though any AI which produced complete musical works would be a very narrow example of general AI.
it is much, much easier to write very specifically-targeted stuff that can perform a few useful tricks than it is to write an actual "composer."
this particular example features a Max/MSP patch designed to turn a very specific database of samples into a very specific style of ambient music.
every time I see anything about this topic on HN, I kind of end up being a bit of a killjoy about it, because the reality is that it is very, very common for people to overhype their results in this context. I've probably been guilty of this myself, in the past.
it's definitely possible to write very satisfying and effective music-generating code, but there is almost never an incredibly deep lesson to learn about the nature of consciousness there. there ARE very, very often incredibly deep and specific lessons to learn about the inner workings of particular musical forms.
I wrote a drum-and-bass drum pattern generator which made me much, much better at writing drum patterns by hand afterwards. one of the professors I studied with (briefly) wrote an amazing fuzzy logic counterpoint generator which allowed him to, effectively, improvise entire concertos, and I'm reasonably confident he learned an enormous amount about classical counterpoint in the process.
(note that "improvise entire concertos" means "play a MIDI wind controller, improvise a tune, and have his software generate appropriate harmonic accompaniment." the overhyping tendency bit me even as I was warning you about it.)
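(for flavour, here's a toy sketch of that kind of narrow generator – not the actual project, and far simpler: a 16-step drum-and-bass pattern where the snare is anchored on the backbeats, the kick is biased toward the downbeat, and hats are sprinkled probabilistically.)

```python
import random

def generate_breakbeat(steps=16, seed=None):
    """Toy drum-and-bass pattern generator: snare fixed on steps 4 and 12
    (beats 2 and 4), kick guaranteed on the downbeat and occasionally
    elsewhere, hi-hats scattered with high probability."""
    rng = random.Random(seed)
    kick  = [1 if i == 0 or rng.random() < 0.2 else 0 for i in range(steps)]
    snare = [1 if i in (4, 12) else 0 for i in range(steps)]
    hat   = [1 if rng.random() < 0.6 else 0 for _ in range(steps)]
    return {"kick": kick, "snare": snare, "hat": hat}

pattern = generate_breakbeat(seed=7)
for name, row in pattern.items():
    print(f"{name:5s} " + "".join("x" if hit else "." for hit in row))
```

(writing even a throwaway generator like this forces you to make the genre's conventions explicit, which is where the "learn about the musical form" payoff comes from.)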
likewise, I think the author of this nautil.us post learned an enormous amount about what makes ambient music work well. and probably got much better at working in Max/MSP.
but this is not Skynet shit. this is not proof of the rise of machine consciousness. this is intricately technical work which develops your skill in programming and musical composition.
that's really all there is to it.
>there ARE very, very often incredibly deep and specific lessons to learn about the inner workings of particular musical forms.
That's always the problem. You can mechanize a style if you work hard at it, are skeptical about what you're doing so you're never satisfied with nearly-almost, and don't mind being ignored because no one ever listens to this stuff anyway. ;)
But it's impossible to create an original computer-generated musical language with non-trivial appeal without having a good model of human musical perception and emotional response.
Most music theory gives you a musical alphabet, and once you have one you can work out how other alphabets work.
But that's a long way from working out how to mechanize the invention of an original but expressive musical language.
I think it's a fascinating problem.
My guess is it's going to stay a fascinating problem for a long time.
Do you have a link to your ebook?
and to a lesser extent http://singrobots.com/
the book's actually on sale right now: $17 instead of the usual $23. caveat: I've really got to redesign it, and I may just write another one because there's a lot more that could go in there.
She can write an infinite amount of new music all day for free. People can't tell the difference between her and human composers when put to a blind test.
Emily Howell fugue: https://www.youtube.com/watch?v=jLR-_c_uCwI
David Cope Emmy Vivaldi (composed by Emily): https://www.youtube.com/watch?v=2kuY3BrmTfQ
I studied with Dr. Cope here:
Emmy is not the same as Emily Howell; the Emmy Vivaldi was composed by a simpler program called EMI.
In either case, iirc, the music's composed by probabilistically combining key-signature-normalized snippets of existing compositions. EMI mostly just took the works of one composer and created a new work in that composer's style by Frankenstein-remixing snippets of the composer's actual works. Emily Howell, iirc, does the same, but uses multiple composers and/or original snippets by Dr. Cope.
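A rough illustration of that recombination idea (a stand-in of my own, not Cope's actual algorithm): transpose snippets to a common key, then chain them probabilistically by matching the end of one snippet to the start of the next.

```python
import random
from collections import defaultdict

def normalize(snippet, tonic):
    """Transpose a snippet (list of MIDI pitches) so its key center is 0."""
    return [p - tonic for p in snippet]

def recombine(snippets, length=4, seed=None):
    """Frankenstein-remix: index snippets by their opening pitch, then walk
    from snippet to snippet, always choosing one whose first pitch matches
    the previous snippet's last pitch."""
    rng = random.Random(seed)
    by_start = defaultdict(list)
    for s in snippets:
        by_start[s[0]].append(s)
    current = rng.choice(snippets)
    out = list(current)
    for _ in range(length - 1):
        candidates = by_start.get(current[-1])
        if not candidates:
            break  # dead end: no snippet begins where this one ended
        current = rng.choice(candidates)
        out.extend(current)
    return out

# Hypothetical snippets in different keys, normalized to a common tonic
snips = [normalize([60, 62, 64], 60),  # -> [0, 2, 4]
         normalize([65, 66, 68], 61),  # -> [4, 5, 7]
         normalize([69, 67, 66], 62),  # -> [7, 5, 4]
         normalize([67, 65, 63], 63)]  # -> [4, 2, 0]
print(recombine(snips, seed=3))
```

The matching step is what makes the output hang together locally, and it's also why the output can drift into "quoting" another composer whose phrases happen to share those joins.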
btw: feed EMI Beethoven, and "she" produces Mozart. i.e., when probabilistically combining several key-signature-normalized Beethoven snippets, some of the results were identical to larger snippets of Mozart (who was, as you may have guessed, a big Beethoven fan).
also btw: Beethoven wrote algorithmic compositions for people to perform as a parlor game, with dice.
also also btw: my own drum-and-bass Ruby project from years ago will generate an infinite amount of new jungle riddims all day for free:
I think you may mean the other way around; Mozart was 14 years older than Beethoven, and died before Beethoven's career took off.
Here's a blast from the pre-Golang world
Not if you created an algorithm that uses GIMP and runs independent of you. Copyright protects artistic works by natural persons.
Cope is the creator of the algorithm and has copyright protection of any artistic elements of that algorithm. Copyright does not protect technical effort no matter how skilled.
In the same way, my camera's firmware writer, though skilled, has technical input into all images created with that camera. But as they don't have artistic input into any specific image, they don't share the copyright; they may have made the image vastly superior with their technical ability (white-balance, focus, filters), but it was technical input and not "artistic".
Edit: a reference for the general principle under the USC, http://en.wikisource.org/wiki/Page:Compendium_of_US_Copyrigh....
the problem with computer-generated music as in this example is that it is formless. The phrases are too expository, and the example lacks any identifiable relationship to anything previously stated. If compared to written language, this example writes beautiful sentences but the paragraph makes no sense.
This limitation can surely be overcome: with advances in AI and deep learning, algorithms will become capable of learning from human compositions, ultimately producing works indistinguishable from human ones (thereby passing a Turing test of sorts), and perhaps even better ones (just as chess engines now routinely beat even genius human players).
This will lead to a world where all casual music can be improvised by computers to the user's liking, even adjusting its parameters to social context (feeding off your Facebook status, how things are at work, and so on) and mastering that skill by learning from your easily detectable reactions, such as skipping to the next track or replaying bits that you liked. Every played piece could then really be one of a kind, like a kaleidoscope that produces unique pleasing images on demand.
The next step might be to generate compelling movie scenarios (I have a feeling that soap operas and genre movies would be the first to be automated like that...), and ultimately even movies themselves.
An interesting TED Talk (from December last year, so quite new) that I've watched recently: http://www.ted.com/talks/jeremy_howard_the_wonderful_and_ter...
The important shift would be to stop generating music, or other artistic content, from a fixed set of preprogrammed principles and instead allow neural networks to derive the rules by themselves, even coming up with new genres. Instead of teaching the computer to write music, you just let it teach itself.
here's the bandcamp for the music the computer wrote. I wonder whether the computer gets the 7 dollars, the copyright, and so on.
1) I probably wouldn't have bought a subscription anyway, but now that the begging to buy one is blocking the article itself, my not buying a subscription is guaranteed.
2) I guess I really don't feel like reading that article anyway. << closes tab >>
Absolutely unacceptable, and I'm getting sick and goddamn tired of it on every other website, especially here on Hacker News, where the general population (of which "people who post things" is presumably a subset) really ought to be well-educated enough about proper user experience to know better.
I was indeed aware of that.
> If you were aware of that, I don't see what's unacceptable about being shown an ad before reading a free (and interesting) article.
It's annoying and distracting. It's simply not good user experience, and the fact that such an ad manages to break through AdBlock Plus (probably because it's not coming from a third-party service, so it's harder to detect as an ad) is frustrating.
And to make this clear, I have no issue whatsoever with asking for a subscription; in fact, I might have been swayed positively (albeit admittedly slightly) instead of all the way to the negative if they had put a "Subscribe to us if you want to read more articles like this!" somewhere at the top, right after the article itself, or off to the side.
It's akin to smartphone apps (and a lot of them do this, especially ad-supported games) that step in between you and whatever you were hoping to do at seemingly random moments and display a fullscreen ad. I understand why they do it (they have to make money somehow, after all), but it's the kind of thing that makes any somewhat-sane person not want to follow that ad or continue using the application, as opposed to the likely-intended purpose of driving users to pay for an ad-free version.