That being said, this is incredibly exciting to me, and I look forward to seeing how it progresses, and probably challenges, my ideas of what music is.
I recently had a discussion about this with a musician. I said that I didn't like it when music was produced (and certainly lyrics written) by somebody other than the performer. I said it took away from the experience of 'getting to know' the person I was listening to.
She basically replied that I was being extremely old-fashioned, and that this 'idea' of music was very harmful to the business. She said it prevented people from working together, each contributing what they did best.
If she's right, I guess we just have to interpret the music on its own, and not see it as the mental state of some individual creator. Maybe this is related to how authors get annoyed when people identify them with their main characters. In any case, non-individual art doesn't seem to be going away.
We can argue whether, for example, Adele is better at conveying emotion because she wrote the song. What we shouldn't do is claim that it's impossible for someone to do so if they didn't write the lyrics themselves.
Now, once you abstract that, does it matter whether the original writer was human, as long as they produce something a singer can connect with emotionally and then project? I don't know. I feel like ignorance would be bliss.
It's sad, but this is the reality of the music industry today. It also means it's highly unlikely that a modern version of the Beatles would ever exist or become successful in the music industry. I guess Coldplay is probably the closest we will come (they certainly write all of their own music).
It would be very interesting to create a secret fake persona and publish AI-generated music under that name, to see if people identify with the lyrics and music. See how far it goes; who knows, maybe create the next superstar. Then later reveal the secret.
For example, I often write pieces of music where I can pinpoint very specific sections where the harmonic, melodic or rhythmic choices (separately or in any combination) sound trite or boring/predictable. I'd love to be able to feed the entire song into a machine, point out those specific trouble spots to it, and have it generate alternatives for just those sections, perhaps on a spectrum from 'not-too-far-from-the-rest-of-what-you've-written' (harmonically/melodically/rhythmically) on one end to 'way-out-there' on the other.
Even if none of the output on its own was usable it would still have value in stimulating my imagination with ideas that otherwise wouldn't have occurred to me and that I could build on or refine.
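The "spectrum of alternatives" idea above can be sketched as a toy program. Everything here is invented for illustration (the scale, the "wildness" knob, the function name); a real system would use a trained sequence model for the infilling rather than random sampling.

```python
import random

# Toy sketch of section-level rewriting: regenerate only a marked span of a
# melody, with a "wildness" knob from 0.0 (stay close to the original notes)
# to 1.0 (any note in the scale is fair game).
C_MAJOR = [60, 62, 64, 65, 67, 69, 71, 72]  # MIDI pitches, one octave

def rewrite_section(melody, start, end, wildness, rng=random):
    """Return a copy of `melody` with the notes in [start, end) resampled."""
    out = list(melody)
    for i in range(start, end):
        if rng.random() < wildness:
            # "Way out there": free choice from the scale.
            out[i] = rng.choice(C_MAJOR)
        else:
            # "Not too far from what you've written": drift <= 2 semitones.
            nearby = [p for p in C_MAJOR if abs(p - melody[i]) <= 2]
            out[i] = rng.choice(nearby)
    return out

melody = [60, 62, 64, 62, 60, 67, 65, 64]
for wildness in (0.1, 0.5, 0.9):  # a spectrum of rewrites for notes 4..7
    print(wildness, rewrite_section(melody, 4, 8, wildness))
```

The point is the interface, not the generator: mark a trouble spot, pick a distance from the original, get candidates back.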
I agree with the sentiment. But from a perceptual perspective, it's getting harder and harder to distinguish what is synthetic and what is organic. The trumpets, strings, and keys you hear on a song you like on the radio? It's very probable that they weren't played by humans: they might have been programmed, with the sounds coming from a massive, hyper-sampled sound bank.
If we ran an experiment and recruited even expert musicians, asking them to classify sound clips into 'played' or 'programmed' categories, I bet they wouldn't get them right.
David Cope's experiments showed that: https://www.theguardian.com/technology/2010/jul/11/david-cop...
Currently, to put music in something you need to license it somehow. In most cases, the generic music you hear in most media is sourced from a company that has a library of royalty-free songs, which it sells for a pretty hefty license fee.
Imagine being able to instead go into some software, punch in some keywords to describe the scene and the duration, track what happens in the scene on a timeline of some sort, and let the computer render original music for the score. No middlemen, no musicians, no royalties or licenses other than for the software. It would change the industry overnight, and make being a professional musician an even less promising career choice than it is today...
Sure, the algorithms, once sufficiently advanced, could probably trick us into thinking that certain examples of generative music were made by a person, and then later reveal their algorithmic origin to prove that "the humans are stupid" and "the Google algorithms are clever", but what are we actually proving here?
Can a computer devise new artistic forms that have some genuine impact on people - can a computer come up with Bacon's Triptych of George Dyer outside of regurgitating fragments of what it already has seen? What do we get out of a computer aping the alcohol-fuelled sweaty anarchic performances of The Black Lips?
The interesting stuff will be to see if this goes to other places that music has not yet gone - some new composition method - manipulation of frequency in ways that humans have not yet devised.
In a way, Magenta's job is not besting Bach. By the definition of Bach (a human being who changes the way we view and enjoy music), a non-human being cannot best Bach. Magenta's job is besting a much simpler, if equally challenging, role: Max Martin, or the writers of "Let It Go".
As it turns out, this kind of music is already pretty formulaic. Much has been written on repetitive chord progressions being spammed across hundreds of famous singles. In a way, artists shouldn't fear the potential of these technologies besting them - they should thank them.
Artists are now freed from loading their albums with eye-rollingly generic lead singles that they immediately get sick of ("Stairway to Heaven", "Creep", "Smells Like Teen Spirit") because record labels know that's what will get the most radio play. You can just let the machine do those. Now an artist's reputation is determined purely by their relative mettle against other human artists.
Pop is maybe 75% performance, sex, status, and charisma. The music isn't irrelevant, but it only really needs to be a committee-produced mashup of contemporary cliches to do its job.
The rest is posing and attitude.
>As it turns out, this kind of music is already pretty formulaic.
But it's less formulaic than it sounds. Discovering that it uses Standard Chord Sequence Number 7 (from the small standard pop set) won't get you close to an interesting song.
A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.
Basically, there are huge gaps between a MIDI cliche machine (buildable now, and not particularly difficult), a full virtual artist who produces even moderately successful tracks without human help, and a musical AI genius who produces completely new musical styles that capture the human imagination for centuries.
You need a model of mind to do that last one, and we're at least 50 to 100 years away from that.
I think this is a grand oversimplification. Personality certainly _contributes_ to pop stardom, but the music is still #1. Before anyone knew who Taylor Swift was, they connected with her through one or more songs.
> A lot of creative detail goes into the production, arrangement, and the vocal performance. Not the MIDI file.
Of course, but even having an autonomous "songwriter" that could write _a_ hit would be a game-changer for music (though obviously most immediately applicable to Top 40 / pop).
> You need a model of mind to do that last one
I disagree. Machines already produce what would otherwise be considered "experimental" music, you just need some deep reinforcement learning to know what has mass appeal.
Only if by 'connected with her' you mean heard her debut hit over and over and over again on radio until it became an earworm.
There are plenty of times and places where people want high quality "music" but don't want to actually engage with it on any level - the music that tells you you're still connected when you're waiting for a conference call, the low volume background music in some retail environments, the music in a lift. If "pleasant musical noise" could be generated automatically and to a sufficient quality I think there'd be a pretty decent market for it.
Eno worked on a number of projects to generate music years ago, with Bloom and similar: http://www.generativemusic.com/. A quick fiddle with that can definitely generate some banal hold/elevator music.
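In that spirit, here is a minimal generative sketch, an assumption about how Bloom-like apps behave rather than Eno's actual algorithm: draw sparse notes from a pentatonic scale with long, overlapping decays, so almost any combination sounds consonant enough for hold music.

```python
import random

# Pentatonic notes rarely clash, which is why generative ambient apps lean on
# them: any subset of these pitches sounds pleasant together.
PENTATONIC = [60, 62, 64, 67, 69, 72, 74, 76]  # C major pentatonic, 2 octaves

def ambient_events(n_events, seed=None):
    """Generate (start_time, midi_pitch, duration) events for an ambient bed."""
    rng = random.Random(seed)
    t = 0.0
    events = []
    for _ in range(n_events):
        t += rng.uniform(0.5, 3.0)           # irregular, unhurried onsets
        pitch = rng.choice(PENTATONIC)
        duration = rng.uniform(4.0, 10.0)    # long, overlapping decays
        events.append((round(t, 2), pitch, round(duration, 2)))
    return events

for event in ambient_events(5, seed=42):
    print(event)
```

Feed the event list to any synth and it will noodle along indefinitely, which is roughly the quality bar that elevator music demands.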
Imagine being able to add to the song with commands like 'add a psytrance bass line', even within predefined parameters, dynamically generating an entirely new bass line informed by other songs in the genre.
Maybe you could instantly add an improvised violin melody by telling it a style given that the chords/key from the human band are consistent.
Sentiment analysers could tweak the music based on crowd reaction towards musician defined goals and learn those pre-sets over time.
If music is a synchronisation layer between humans, maybe machine learning could help us to synchronize even more closely.
This has been done with LSTM. Impressively, the NN is generating waveforms and not MIDI notes - at around 3:50 it even attempts some singing.
One of the things you realize if you study jazz improvisation deeply is that it isn't that random. Someone like Charlie Parker learned over 100 interesting riffs or patterns, then learned to play them in any key (transposing as necessary), and, when improvising, would transpose each into the chord the band was currently playing, arranging them in a unique and interesting order.
Indeed, this is why many great jazz musicians learn to improvise by transcribing solos of Charlie Parker, John Coltrane, etc., and learning their riffs in every possible key. Transposing is one of the best possible ways to learn to improvise, because it teaches you to listen and hear notes, as well as the patterns/riffs that everyone copies from each other.
There is even a great book called "Patterns for Jazz" that captures many of the most powerful riffs used by these musicians.
The really interesting thing about this is that while most of the listening public assumes jazz is pure improvisation, much of it is copied riffs rearranged in unique and interesting ways. I don't mean to detract from it; jazz is still a great musical style, but, like all styles, it results from derivatives of previous works.
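The transpose-and-arrange process described here is mechanical enough to sketch in a few lines. The riff names and interval patterns below are invented for illustration; they are not transcriptions of actual Parker licks.

```python
# Riffs stored as semitone offsets from a chord root, so one stored pattern
# can be replayed over any chord -- the transposition trick described above.
RIFFS = {
    "enclosure": [1, -1, 0, 4, 7],
    "bebop_run": [0, 2, 4, 5, 7, 9, 11, 12],
}

def play_riff(name, chord_root):
    """Transpose a stored riff onto a chord root (given as a MIDI pitch)."""
    return [chord_root + offset for offset in RIFFS[name]]

def improvise(chord_roots, riff_order):
    """Arrange chosen riffs over a chord progression, one riff per chord."""
    line = []
    for root, name in zip(chord_roots, riff_order):
        line.extend(play_riff(name, root))
    return line

# A ii-V-I in C major: chord roots D (62), G (67), C (60).
solo = improvise([62, 67, 60], ["enclosure", "bebop_run", "enclosure"])
print(solo)
```

The "creative" part that this sketch leaves out is exactly what the comment says Parker excelled at: choosing which riff to play when, and in what order.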
Mmmmm... that sounds like overfitting. That's not "attempting some singing", that's "playing back one of the things it trained on". Which really raises questions about the rest of what you hear, too; it seems like what is being produced is probably in some sense the "average" of the training data, rather than something able to generate new samples from it. But it's a very interesting "average" full of interesting information.
Since I wouldn't expect this to produce much else, I'm not being critical about the effort, just pointing it out so others understand what they are hearing. It was an interesting and worthy experiment that I wondered about myself.
But the masterpieces of music demand something altogether different: Beethoven's break with consistency by switching keys in the adagio of the 5th Piano Concerto; Wagner introducing the Tristan chord; Berg using C major in Lulu only as a joke, when the word 'money' is mentioned.
Add to that ingenuity the personal drama behind music: from Bach's crisis of faith leading to 'What God does is well done', to Mahler losing so many of his children that he composed the Kindertotenlieder ("Songs on the Death of Children"), to the origins of a simple pop song such as 'Tears in Heaven' that moves people tremendously... Music is shaped by our biological life cycle, not by that of a computer program.
“I placed myself in the situation that a child of mine had died. When I really lost my daughter, I could not have written these songs any more.”
Der Tod ist ein Meister aus Deutschland. ("Death is a master from Germany.")
Look at the examples you've provided. They gain additional meaning because of the context you've given. When dealing with art, people like to wonder what the author thought, how they felt. People try to connect. With machines, they know there is no human to connect to on the other side, so the work will be considered inferior.
Personally, I think AI is so far from doing anything on a human-like scale that neither we nor our grandchildren will be alive to be fooled by such advances.
I think he lost 1 child, not many. And it was a few years after finishing Kindertotenlieder.
EDIT: And then the inevitable schism among fans about what their base personalities should have been like, and the resulting clones, throwing the AIs into an existential crisis when confronted with their alternative versions..
Ultimately I don't think this is very worthwhile, because I personally believe the entire definition of art and music is a production filtered through the human experience. The same piece of music would mean more to me coming from a human than from an AI or program. If someone told me it was from a human and then later told me it actually came from an AI, it wouldn't really accomplish anything deep; it'd just make me feel tricked.
Isn't it OK to let ourselves be moved by its love letters to humanity?
Every Monday, there are hundreds of tweets from Spotify users who declare that the Discover Weekly algorithm understands them better than anyone, that they want to marry it etc. (This is a remarkable testament to neural network AIs. Not many people want to marry Netflix or Amazon's recommender system, to put it like that!)
Of course that only finds music for you, it doesn't create it from scratch. But given the immense size of the library it searches through, that's impressive enough that many really get the feeling that it knows you, understands you. I hope generative algorithms will be that good one day, and I won't waste time worrying that it's fake once they are.
I had a listen yesterday. It's okay. Nothing revolutionary. Competent though.
I think it's funny they added drums to make it a bit more listenable, because otherwise I think people would generally tune out about a third of the way through; it's okay but not really engaging.
T-Mobile holds the trademarks for T-anything, Magenta, the color magenta, etc., covering not just communication and networking but also music.
Can code generate a catchy pop tune? I'm sure it can. Would I qualify it as music? Probably not.
Music can pull from sources that are EXTERNAL to the music. Here's a modern track that does just such a thing:
Music can have some very non-traditional structure that most would qualify as "noise".
Hell, music can even be "silent".
Ultimately, I think that it is possible for a machine to generate music, but we're going to be talking about something that is more or less an AI at that point; after all, music has a soul.
If instead you define art as the communication of things that can't be fully expressed by direct sterile language, that's something I can get behind. Going that route, I think that getting AI to create real art has some pretty clear challenges, but there are also ways where I could see it being better than humans.
Depending on how the AI learns and analyzes it has a chance to have a unique perspective on human communication. From there, it can find new and innovative ways of identifying gaps between primal human experiences and sterile human communication - and with a unique understanding of human communication could come up with fascinating ways of bridging those gaps.
This is all spitballing, but I think AI could eventually be the main new frontier in art.
>If instead you define art as the communication of things that can't be fully expressed by direct sterile language
Reading this, the first thing I thought of was emotion, feeling! So much of that is wrapped up in who we are, in what we are, in our experiences, and in our limitations.
I like spitballing, so here's a thought exercise:
Let's say tomorrow we have AI, real thinking machines, with Asimov's classic Three Laws. Let's also assume we have a few of them. Because of what and who they are, is it possible that an AI generates something that other AIs consider "art" but that we, as people, do not?
That being said, musicians have been doing machine-assisted composition (sequencer-driven, stochastic, etc.) for a long time. I can imagine great art coming from AI if the composer is there to help "guide the AI tool" in the direction he or she wants. AI as a means to an end, I don't see greatness coming from that... AI as an instrument, that could be very interesting...
And yes, musicians use accidents all the time, both digitally and in the acoustic world, that suddenly make for interesting listening. Music has never been 100% human.
And the last point: if you want computers to be able to fabricate the stories themselves, and maybe create output that tells or "exploits" such a story, then maybe it isn't a human story, or maybe they can emulate our story. In the end it comes down to what consciousness is, what intelligence is, and all of that. But if intelligence can be artificial in a way similar to humans, then AIs can in theory create vast and amazingly intricate stories that would be even more fascinating than what a human could come up with.
I'm listening to Boards of Canada and Aphex Twin right now. What about those can't be analyzed and recreated? There's no complex bowing technique or lyrics to speak of, but it's still relatable, and some, I'm sure, would say soulful. BoC for certain (by their tone) evoke "nostalgia". Does that qualify as soulful? Does it seem out of reach of algorithms?
I think "soul" will have to be better qualified unless it's just going to be tautological.
But a machine does not yet have the capacity to lay its finger on the cultural pulse and create something new and slightly different that strikes a resonant chord, pulls people in a new direction, makes them grow, because in addition to having a certain algorithmic description, the music they are making also betrays a certain darkness, a blend of fear, the grotesque, and dissonance that is there, somewhere down inside us, and that we love despite its ugliness. Until a machine can discover this part of humanity and translate this roiling mass into music, it hasn't managed the same feat as Aphex Twin.
This is what makes music great, that it is born from a place that requires loving observation of humanity. Real creativity requires having insights about humans, representing them in art, and exposing your insight through a particular medium. The image itself, the sound, means nothing. It's the reception, the speaker, the meaning, that matters.
A machine has no place in society. It isn't "straight out of Compton". It CANNOT make music.
James/Aphex said back in 1997 that he was using algocomp for some of his music. :)
This will also help attract young researchers to TensorFlow, so it's a low-cost win for them.
Meaning in music can come from the creator or the listener.