Hacker News new | comments | show | ask | jobs | submit login
What makes music sound “good?” (princeton.edu)
186 points by grimgrin 274 days ago | 71 comments



I'm a fan of Juergen Schmidhuber's theory: http://people.idsia.ch/~juergen/creativity.html

The idea is that the brain finds it pleasing to learn things. It effectively seeks novelty. Repetitive, predictable music does not sound pleasing. Pure randomness also does not sound pleasing. Somewhere in between is novelty: patterns that are definitely real, but new, that somehow violate your brain's expectations.
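A crude way to see the "between repetition and randomness" idea is to use compressed size as a stand-in for predictability. This is only a loose analogy to Schmidhuber's compression-based account; the note ranges, lengths, and 80/20 mixing ratio below are arbitrary choices:

```python
import random
import zlib

rng = random.Random(0)

# Three 600-note "pieces" encoded as bytes (MIDI-style note numbers)
repetitive = bytes([60, 64, 67] * 200)                    # one arpeggio, looped
noise = bytes(rng.randrange(48, 84) for _ in range(600))  # pure randomness
varied = bytes(rng.choice([60, 64, 67]) if rng.random() < 0.8
               else rng.randrange(48, 84)                 # mostly pattern,
               for _ in range(600))                       # occasional surprise

sizes = {name: len(zlib.compress(data))
         for name, data in [("repetitive", repetitive),
                            ("varied", varied),
                            ("noise", noise)]}
# The looped piece compresses best, the random one worst,
# and the pattern-with-surprises sits in between.
```

Compressed size is only a proxy for predictability, not for pleasure, but it does place "patterns that are real, but new" between the two extremes.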


I think these musical patterns discovered by the brain are multi-dimensional. The music is just a serialization of this multi-dimensional object, possessing various kinds of symmetries at different levels. It's as if a complex model were presented piece by piece, allowing the receiver to reassemble it. Once the multi-dimensional structure is recovered, there is a feeling of "musical" pleasure. Every note is integrated with the other notes and has its place in this structure; its significance to us emerges from its relation to the whole structure.

The ability to enjoy music is related to how good the high-dimensional reconstruction is. A listener who has no experience with a genre might not perceive subtle symmetries and higher-order patterns; to her, it's just some kind of noise. The more she listens, the more her musical "vocabulary" and her ability to perceive these symmetries grow. Developing a taste for it is developing the ability to represent it fully as it is, a form of integrated information.


"The more she listens" part is likely why we can listen to an album once and dislike it and then on a second, third and fourth listen develop an increasing appreciation for it until it potentially becomes a favorite album. I always found that phenomenon strange, but your comment is an interesting theory for why that happens.


This is one of the better descriptions I've read regarding the appreciation of music.

Thank you.


Anecdotal, but it feels as if that goes against standard chord progressions, keys, and almost all EDM.

Blues has a fairly strict formula that most songs follow. Most songs are in 4/4, and most modern music sounds fairly similar, yet people are really into it.


Aren't those patterns and structures just there to avoid cognitive overload, while we're entertained by lesser variations?

(BTW, I believe we're overly simplifying by speaking of music as a single entity. All of the elements you mentioned are a foundation in more popular music, but good luck finding them in more modern or experimental genres.)

The cognitive effort to digest Schoenberg is different from that for a pop song. Still, you can progressively familiarize yourself with a genre, and relax on pieces that seemed hard and inaccessible earlier.


> Still, you can progressively familiarize yourself with a genre, and relax on pieces that seemed hard and inaccessible earlier.

Yes, as a passionate music collector and someone that can get lost in weird, obscure and very leftfield music, this is something I notice all the time. You start with something accessible only to find yourself enjoying obscure 70s synth funk recorded on tape in someone's bedroom months/years later. Or similar.

It's why we recommend "Kind of blue" whenever someone wants to get into Jazz, which is difficult if you just randomly start...anywhere.


There's far more to musical creativity than chord sequences and time signatures. Blues is made interesting by the musical ability of the performers.


it's almost mystifying to me at this point that not everyone's on board with this idea. most of what i care about musically is captured poorly by the traditional notation of western classical music.


Only people who have never made music think in that way.


Many electronic music producers (including myself) share a similar sentiment. There's a lot to a carefully produced song that can't be fully encompassed by sheet music.


Well, I have made music, and I think in that way.

The mere score of a blues piece, for example, is all but useless for conveying the power of a particular blues performance.


The notation is just a model, an imperfect one (as all models are). That's been the case since Bach, if not earlier.


> Somewhere in between is novelty

Most 12-bar blues use a similar chord progression but have different melodies. Even the same song performed by two different musicians will sound different enough to be perceived as "novel".


.


It is, but that's not really contributing anything here, unless you're saying that diatonic scales and popular chord progressions are also shit.


I can't help but think "reduction of entropy is information is good" is the Shannon version of Feynman's "energy makes it go".


But that clearly isn't all -- I've listened to _No More Shall We Part_ probably a thousand times and it's still good.


I've been looking for new music to enjoy and this fits the bill - thanks for posting.

Israel Kamakawiwo'ole has similar vocal attributes, also very enjoyable.


This is literally the closest I've ever come to encountering another Nick Cave fan "in the wild".


And white noise makes me feel calm.


Even lovers of serious music have certain favorite recordings that they listen to repeatedly, even though those pieces have been recorded by dozens of others.

Pop music is repetitive and predictable, yet, ... well, enough said there.


There's the abstraction of music, and then there's music.

Just because we can transcribe an audio recording into 12 tones, and 16 divisions of a bar, doesn't mean that's all the information it contains. There's a whole lot more.

There are little variations in pitch, tone, dynamics, etc.: all the stuff that separates a great recording from a lifeless snooze.

My theory is that pleasing music falls halfway between the predictable and the unpredictable. So if the beat is too predictable, the artist can always compensate by using an unusual melody, etc. But many of the ways an artist can add unpredictability can't be expressed with traditional notation.


The difference is the space of time between experiences.

I'm a "lover of serious music" and I do have some favorite albums that I come back to every now and then. But I don't listen to the same song in a constant loop on repeat. That would be dreadfully boring.

Similarly, most people would not enjoy a song consisting solely of a single measure repeated verbatim over and over again. Even the most repetitive music has some variation.


Not a single measure, but this one must be at the limit of how repetitive music can be while still being interesting (at least for a few minutes, for me): https://m.youtube.com/watch?v=RY26KNRhbOA


Which makes me think - maybe algorithmic composition would work better the other way around - start with one or two short phrases and repeat them, adding more variation, and different types of variation over longer timescales.
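A minimal sketch of that scheme in Python. The motif, the mutation rate, and the size of the chromatic nudges are arbitrary choices, and the output is plain MIDI note numbers rather than audio:

```python
import random

def vary_and_repeat(motif, repeats=8, growth=0.1, seed=1):
    """Repeat a motif, mutating a larger share of its notes on each pass."""
    rng = random.Random(seed)
    out = []
    for i in range(repeats):
        p = min(1.0, i * growth)  # probability of variation grows over time
        for note in motif:
            if rng.random() < p:
                note += rng.choice([-2, -1, 1, 2])  # small chromatic nudge
            out.append(note)
    return out

line = vary_and_repeat([60, 62, 64, 67])  # C D E G motif
```

Because the mutation probability starts at zero, the first pass states the motif verbatim, and each later repetition drifts a little further from it, which is exactly the "more variation over longer timescales" shape described above.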


Beethoven would agree :-)

On a more serious note -- yes, but even this would become boring quickly despite the variations. This problem has been solved by:

- using different themes for different sections, or

- intertwining two or more such themes for contrast within the same section.

Prime examples of this approach are Beethoven piano sonatas.


Reminds me of The Antikythera Mechanism by BT (This Binary Universe) [0]. Each "loop" isn't really a loop, because it's subtly different from the previous one, and when a phrase comes back around, the context has changed, creating progression all along the song.

I seem to recall I read somewhere that (part of?) the album was painstakingly made using Supercollider [1], but can't find a proper reference.

[0]: https://www.youtube.com/watch?v=dKcAfkkipKU

[1]: http://supercollider.github.io/


The first track on the album was made entirely in csound.

https://twitter.com/bt/status/330843540076769281


Haha that was it! Many thanks for the correction :-)


I was happy to see a fellow BT fan on here :)


https://www.youtube.com/watch?v=mjnAE5go9dI

Indeed, even subtle variation can be enough to turn something "musical".


The decaying of the audio is beautiful. I hadn't heard of this project before, and now I have 5 hours of ambient music to code to today, thanks so much for sharing!


Depends on the personality and relationship with music. In the past I had gone through periods of listening to the same two or three tracks over and over again, day in and day out while driving to and from work.

I play music, which requires listening to yourself practice the same things over and over again. Sometimes just a couple of bars. I like it.

Even if you get bored, that is not the same thing as the music suddenly sounding "bad". It sounds exactly the same.


I also play music and understand what you mean by practicing the same things over and over again.

But when you are practicing, you rarely play it exactly the same each time. Otherwise, what's the point of practicing? Hopefully, it sounds a bit better every time you go through it.


This does not explain why there is a semi-universal experience that certain chords sound "happy" and others sound "sad".


actually, here's some recent research suggesting that the experience is less universal than is often assumed (or at least, much more dependent on culture, and less so on basic universal human biology, which is what i assume people are talking about when they say universal here):

http://arstechnica.com/science/2016/07/the-jaws-theme-might-...

http://www.nature.com/nature/journal/vaop/ncurrent/full/natu... ("Indifference to dissonance in native Amazonians reveals cultural variation in music perception")


I find it remarkable how musical the final result sounds.

Actually I find it remarkable how unmusical it sounds. When studying music theory there are often very simple rules you're mostly meant to follow, yet I've never heard any computer-generated music that sounds even close to passable. It seems surprising to me; I would have thought it would be easier.


I am not too surprised. I think these efforts are like trying to get computers to write essays by following established rules for writing style. They can follow those rules well enough, and so they don't sound like complete amateurs . . . but at the same time, they aren't saying anything.

Which is exactly how computer generated music sounds to me. Pleasant enough, superficially, but with no real content.


I think the missing element is repetition.

Human-generated music uses repetition a lot. If not in the melody, at least in the harmonic progression or rhythm.

This algorithm doesn't seem to attempt anything like thematic development or rhythmic drive.

I wonder how it would sound if, instead of looking ahead only one chord, the algorithm instead generates randomly up to a certain period, then repeats what it just played with some slight perturbations.


Although I admit I did not listen to the recording, what you're saying reminded me of this article on the key relationship of repetition to music:

https://aeon.co/essays/why-repetition-can-turn-almost-anythi...

In particular, the passage beginning:

"Can music exist without repetition? Well, music is not a natural object and composers are free to flout any tendency that it seems to exhibit. Indeed, over the past century, a number of composers expressly began to avoid repetitiveness in their work...."

The authors of the piece found that both amateur and highly knowledgeable listeners thought that modified versions of some pieces, in which repetition had been injected, were more persuasively musical.

And the larger point is the very deep relationship between repetition and music.


Ray Kurzweil seems to have made a passable song using a computer: https://www.youtube.com/watch?v=X4Neivqp2K4

Granted, I wouldn't say it's song of the year or anything, but it's definitely listenable. Does anyone know of his general approach in his software for writing music?


Does the computer generated music "compose" music based on music theory or by learning from other songs? Much like literature, music can be good because an artist tastefully broke the rules and typical norms. A song-writing algorithm full of rule breaking will not necessarily create a song that fits the bill of "good" music.


Even the basic rules turn out to be very complicated.

Core elements are often cliched, but getting from a cliched chord sequence to a complete piece/song that captures the imagination of listeners is very much harder than it looks.

I've seen an expert system for generating counterpoint that used a grammar with more than a hundred separate production rules, and it still failed to generate interesting bass lines.

Interesting music is extremely non-trivial. Conversely, trivial algos used in trivial ways reliably produce trivial output.


>an artist tastefully broke the rules

Tastefully. That's the biggest problem with computers.


Sure, that's the big problem in automation of many endeavours. We can automate things where it's objectively clear what's right and wrong; however, if the criterion for "good X" is literally "X that humans would like" (e.g. all art), then you pretty much need a human in the loop, or you try to simulate a human listener as opposed to a human composer.

If we had a magical black box oracle that could tell us that variation A is "5 good" and variation B is "5.5 good", then that would be sufficient to implement a system that makes a lot of great art. But we don't, and possibly can't without strong AI or something like that.


David Cope seemed to do pretty well:

https://psmag.com/triumph-of-the-cyborg-composer-620e5aead47...

IIRC the samples sounded good to me, but I have been unable to play them now to refresh my memory.


Emily Howell is the name of the program. Some examples:

Track 1: https://www.youtube.com/watch?v=QEjdiE0AoCU

Fugue: https://www.youtube.com/watch?v=jLR-_c_uCwI

Here's the first CD produced: https://youtu.be/A9XCexln6xY?list=PLUSRfoOcUe4a-4pXqqET9DkPn...


The fugue does a very good job of making me cry


> when studying music theory there are often very simple rules you're mostly meant to follow

Agreed. Though truly breakthrough music is often about knowing when and how to break those rules. Some of the most timeless songs are the ones where the artist made an unexpected deviation that bent the rules in just the right way. It's something that only the very talented can pull off without doing so by accident.


There are some interesting tidbits here, and I really like the table of consistency versus consonance!

But overall, isn't this saying that removing the randomness and applying known music theory is what makes music sound musical? Is there any insight that using the computer is uncovering?

Randomness produces too much motion, and also fails to establish a pattern or theme. A set of random major chords still sounds very random; it doesn't progress, and it leaves the listener unsatisfied. So many attempts at computer music start from randomness and then proceed to remove it little by little with structured rules. Maybe starting from randomness isn't the right place to start?

Had a tiny epiphany about randomness recently when I edited a video with still photos in it and applied my computer's "Ken Burns" effect, where it zooms & pans slowly. The automatic version picks random start and end points, fairly close together, and the movement is slow and gentle. But I watched it and noticed it was very unfocused and added unharmonious motion.

Ken Burns is telling stories with his pans & zooms: zooming in to highlight a specific face he's talking about, or panning over to reveal a place. It was a pain to re-edit the video manually and hand-animate every pan & zoom, but when I was done, I was completely shocked how much less motion there was, like an order of magnitude less movement. The randomness had just scattered everything and didn't go anywhere. That's what I'm hearing in these musical examples: Brownian motion, too much movement that doesn't go anywhere.


There was something else like this on HN a few months back, but I can't find it now. Like this generator, it generated music which sounded OK on a scale of seconds. Lacking any higher level structure, after tens of seconds it was clear the music wasn't going anywhere.

This is like generating sentences with statistical autocomplete. For a few words, the phrases sort of make sense, but that illusion disappears with more length. More high-level structure is needed.
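The autocomplete analogy is easy to demonstrate with a word-level bigram model: each word is drawn only from the words seen to follow the previous one, so any short window looks plausible while the whole has no destination. The toy corpus and seed here are made up:

```python
import random
from collections import defaultdict

corpus = ("the band states the theme then the band varies the theme "
          "then the theme returns home").split()

# bigram table: word -> list of words observed to follow it
follows = defaultdict(list)
for a, b in zip(corpus, corpus[1:]):
    follows[a].append(b)

rng = random.Random(3)
word, out = "the", ["the"]
for _ in range(12):
    # pick a legal successor; restart at "the" if we hit a dead end
    word = rng.choice(follows[word]) if follows[word] else "the"
    out.append(word)
# Adjacent word pairs are locally plausible, but the sentence goes nowhere.
```

A music generator that only looks one chord ahead has the same failure mode: every transition is locally legal, but there is no higher-level plan.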

Somebody will probably figure this out soon using deep learning and grind out background music for movies. Oh, right.[1][2][3][4] (Juke-bot may be a hoax.)

[1] http://juke-bot.com/ [2] https://www.jukedeck.com/ [3] http://www.athtek.com/digiband.html [4] http://tones.wolfram.com/generate/


That's what makes it sound "musical", not "good".

A lot in music comes down to establishing patterns and breaking them. Simpler music, like pop, relies heavily on well-known patterns, but at least the chorus usually has some element of surprise within the song.

My theory is that people like music that is just a little (or, for the more adventurous, a little more) surprising.

With training in listening to music, as a musician or just as an ambitious listener, your taste will begin to widen to more complex music. This is because you begin to recognize its patterns (and only once a pattern is established can you break it to create suspense or surprise). Simpler stuff will become very shallow because it's so predictable (just remember your shameful musical taste from when you were coming of age).

Of course there is more to a pop song than harmony and melody, so even when your taste becomes more sophisticated you might like a pop song for its emotional appeal or some subtle complexity or depth the novice won't even notice.


Sort of a Lorem Ipsum in music.

Listening to good music (as a non-musician) is, for me, mostly about experiencing emotions.

Listening to this music immediately becomes about identifying what rules the generator was programmed to follow. The stricter the rules (less random), the more music-like it may sound, but without any emotion. Except maybe comedy.


I first heard of Dmitri Tymoczko from his book A Geometry of Music [1], which I found to be very enlightening.

This is a cool demo, although I suppose it shows a bit more where the basic texture of western music comes from than songwriting. But what I find most interesting about his ideas is how he's able to fairly convincingly connect them to the entire western music tradition.

[1] http://dmitri.tymoczko.com/geometry-of-music.html (there's a link to Amazon from that page)


Some of these audio examples remind me of a long night my freshman year at MIT, when at around 7 AM, in delirium, my friend and I wrote a few hundred lines of code in Matlab, yes Matlab, to procedurally generate music by probabilistically moving from chord to chord based off of the frequency of such changes in real music. Added some bass for counterpoint and some terrible percussion synthesized from white noise and we had what we believed to be some real bangers.
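What that late-night Matlab hack describes is essentially a first-order Markov chain over chords. A sketch in Python; the chord vocabulary and transition probabilities here are invented for illustration, not measured from real music as theirs were:

```python
import random

# Hypothetical transition probabilities between chord functions
TRANSITIONS = {
    "I":  {"IV": 0.4, "V": 0.4, "vi": 0.2},
    "IV": {"V": 0.5, "I": 0.3, "ii": 0.2},
    "V":  {"I": 0.7, "vi": 0.3},
    "vi": {"IV": 0.6, "ii": 0.4},
    "ii": {"V": 1.0},
}

def progression(start="I", length=8, seed=7):
    """Walk the chord graph, weighting each step by transition probability."""
    rng = random.Random(seed)
    out = [start]
    while len(out) < length:
        options = TRANSITIONS[out[-1]]
        out.append(rng.choices(list(options), weights=options.values())[0])
    return out

prog = progression()  # e.g. an eight-chord functional progression
```

Every individual transition is idiomatic, which is why this kind of output sounds superficially plausible; the lack of phrase-level planning is why it still reads as "piping hot garbage" over a longer span.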


And when you sobered up? :)


Quickly realized the true nature of what we had created. (That is to say, piping hot garbage.)


Evolutionary accident, perhaps. We have been conditioned for millions of years to detect dangers and distinguish among the sounds of nature. Obviously, the sounds of birdsong and most animal cries are in so-called major tonality, while sounds of nature like wind, rain, thunderstorms, etc. belong to minor. The swinging from one to another, and variations on a theme, are pleasant to us the way natural bright colors are more pleasant than dull ones. Rhythm is another main factor, and it probably has something to do with repetitive patterns of behavior, the physiological phenomena behind some trance states (which are related to the sexual arousal that makes an animal numb), and rhythms found in the sounds of nature.

But, of course, it has nothing to do with harmonic series and other man-made concepts of the mind.


I just bought a Roland JD-XI and not having an understanding of music theory I was looking for anything that talked about What Sounds Good and Why.

I'd like to read more without really diving into music theory too far.


If you want to make music, but don't care about music theory, get a book called The Art of Mixing.

It won't tell you how to write a song, but it'll start you with a solid foundation for music production. You can start by just finding loops and samples online and putting tracks together that way. Then you can start replacing bits with original music. Think of it like learning how to program by cutting and pasting code from stack exchange.

https://www.amazon.com/Art-Mixing-Recording-Engineering-Prod...


I enjoyed reading the HookTheory[1] eBook and got a lot out of it. It illustrates its numerous examples using a neat colored notation visualizer, and does a great job explaining melody and functional harmony. It plays simple piano MIDI to illustrate concepts using popular songs. There were quizzes every few pages to test your knowledge, which I found fun. There are some insights to be found here based on the large amount of data they've gathered from analyzing popular song chord progressions.

My only complaint is that the audio playback has annoying zero-crossing pops.

[1] https://www.hooktheory.com/music-theory-for-songwriting


It's an endlessly expansive topic, but usually people start out with:

1) Lessons or books on how to play a specific instrument. The good ones will cover the most relevant basics of music theory--stuff like chords, scales, keys, and voice leading.

2) Learning how to play your favorite songs. Copying performances and then experimenting with variations on them is essential for getting an intuitive sense of how it all works.


Melody in Songwriting is a fantastic book. I can't recommend it highly enough. It covers rhythm, phrasing, melodic stability, melody as it relates to harmony (chords), melody as it relates to bass, melody & harmony & bass. It's especially good when paired with the Coursera songwriting course.

The author covers lots of ground this algorithm seems to ignore. I wonder what sort of music a program that used these suggestions could make.


Try books on "design" topics more generally. Visuals, storytelling, and UX are unrelated to music on the surface, but they give you a way in to grounding your abstract thoughts and finding the thing that the rest of the work is derived from. It's the thing artists always talk about in interviews where they're like, "it was about this one time in my life where..."

That may seem disconnected from the sounding good part but it's what gives you a core to build on. You can take the thought and express it in a very simple, low-fi, minimalist way, or go all out and build a huge arrangement and smother it in production. That's something that is better to decide upon by design than by trial and error.


For a general understanding, please check "How Music Really Works" by Wayne Chase.

Given your instrument choice you might also be very well served by the "Music Theory / Harmony / ... for Computer Musicians" series by Michael Hewitt.


I started thinking about computer music generation when I met Peter Langston (http://peterlangston.com/Papers/amc.pdf), who was contracting with Sun, doing audio work on the original 'project green' Star7 device. There were many places where being able to play a tune of a typical "flavor", but royalty free, was required. I just thought it would be nice to have continuously variable music on hold.


for most of the examples on that page, the first feeling i had was that it sounded like a naive/paint-by-numbers version of free jazz/modern classical/a tense passage in a movie score.

i thought the last clip was actually enjoyable, the busy clattering noisy one with the effects and the variety of instrument sounds. there's plenty of weird electronic music i listen to that's in that ballpark at times.

these clips would all make fine sample fodder.

i appreciate the attempted disclaimer at the end ('That is, "good" to typical Western listeners'). but i think a lot of typical western listeners don't realize how much things like texture (timbre, whatever) and other sorts of things not easily captured by western classical music notation matter to them. for instance, a lot of what makes old jazz sound like old jazz is that great warm scratchy sound that the recording technology of the time necessarily imprinted on the original recordings. likewise, if you scored autechre or aphex twin and got an orchestra to play it, it might sound interesting or enjoyable, but it wouldn't sound anything like autechre or aphex twin. and of course, the intonation and cadence and syllable stretching of well delivered rap vocals or well executed vocal sample chopping is nothing if not musical, but those things aren't easily transcribed by the notation in question here either. but those are all things regularly enjoyed by a wide swath of western listeners. so i think the disclaimer should be more like 'That is, "good" to the Western classical music establishment.'


One hypothesis is that music has evolved to be "difficult" to compose, because too much really good music would be bad for us: http://whatismusic.info/blog/OnTheDifficultyOfMusic.html


I wonder if the people over at Brain.fm have seen this? After using the service for a while, I've found it fascinating how they've managed to turn "randomness into something recognizably musical" (I'm quoting Dmitri Tymoczko) that I can then use to focus with.


It's a large library of human-recorded loops, mixed and matched together at random. There's essentially nothing in the way of automated music composition.



