

Compositional Music Composition - noelwelsh
http://underscore.io/blog/posts/2015/03/05/compositional-music-composition.html

======
tessierashpool
Chris Ford did a terrific video along the same lines, in Clojure:

[http://www.youtube.com/watch?v=Mfsnlbd-4xQ](http://www.youtube.com/watch?v=Mfsnlbd-4xQ)

~~~
sgrove
It's a mesmerizingly good talk/performance for someone as foreign to (making)
music as I am.

------
mazelife
This seems to be similar to Euterpea, a domain-specific language embedded in
Haskell for computer music composition:
[http://haskell.cs.yale.edu/euterpea/](http://haskell.cs.yale.edu/euterpea/).

I'm curious what the differences are, either in capabilities or in overall
goals, if anyone has looked at both.

~~~
noelwelsh
I believe Compose is loosely based on Euterpea, but is much less full-featured
at this point in time. Euterpea probably can't parse guitar tab, though (a
very fun feature that was recently added to Compose).

~~~
davegurnell
I dug into The Haskell School of Expression
([http://www.cs.yale.edu/homes/hudak/SOE/](http://www.cs.yale.edu/homes/hudak/SOE/))
a little to produce Compose. Since I don't know much Haskell (yet), the
influence was superficial, but I've no doubt that the similarities here are no
coincidence.
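
The core idea the two share is tiny, though. As a purely hypothetical sketch
(not Compose's or Euterpea's actual API), it's roughly:

```scala
// A hypothetical sketch of the shared idea (not either library's real
// API): music as a small algebraic data type with sequential (+) and
// parallel (|) composition.
sealed trait Music {
  def +(that: Music): Music = Sequential(this, that) // one after the other
  def |(that: Music): Music = Parallel(this, that)   // at the same time
}
final case class Note(pitch: String, beats: Double) extends Music
final case class Rest(beats: Double) extends Music
final case class Sequential(a: Music, b: Music) extends Music
final case class Parallel(a: Music, b: Music) extends Music

// a two-note melody over a drone
val example = (Note("C4", 1) + Note("E4", 1)) | Note("C3", 2)
```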

------
kragen
This is pretty similar to the way synthgramelodia
([http://canonical.org/~kragen/sw/aspmisc/synthgramelodia.py](http://canonical.org/~kragen/sw/aspmisc/synthgramelodia.py),
although git cloning that directory is probably the easiest way to download
it) composes its stochastic music
([http://canonical.org/~kragen/sw/synthgramelodia/](http://canonical.org/~kragen/sw/synthgramelodia/)).
I did it in Python and didn’t use operator overloading for the combiners, in
part because the melodies are stochastically composed DAGs rather than
manually composed ones.

It has the same primitive melodies of a rest and a note, and also the
concatenation and concurrent ways of combining them, but with some additional
details which make a big difference (there's a sketch in code after this
list):

0. Primitive notes and rests are all the same length and (for notes) pitch
and volume — one beat, A2, and 0dB; to get other pitches, lengths, and
volumes, you have to use combining forms.

1. Concatenation and concurrency adjust the lengths of the combined melodies
to be equal, so that the duration of a melody is necessarily a power of two.
At different times I've done this by different combinations of repeating the
shorter one, speeding up the longer one, and slowing down the shorter one.
Currently the code repeats the shorter one in all cases, but I think this
doesn’t work as well as some of the previous combinations I’ve tried.

2. Concatenation drops the volume of the second half of the resulting melody
by 2dB, resulting in a sort of fractal loudness contour with clearly defined
measures.

3. Concurrency raises the pitch of one of the melodies by an octave, so that
you tend to have similar motifs in different octaves.

4. There are also Transpose and Louder combining forms, which make all the
notes reachable. Transpose drops the pitch of a melody by a perfect fifth,
while Louder raises its volume by 3dB.
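
If you want the shape of those rules without reading the Python, here's a
rough re-sketch in Scala (the names and structure are mine, not the actual
synthgramelodia code):

```scala
// A Scala re-sketch of the five rules above (synthgramelodia itself is
// Python). A melody renders to events of (start beat, semitones above
// A2, volume in dB).
sealed trait Mel
case object Note extends Mel                         // rule 0: one beat, A2, 0 dB
case object Rest extends Mel                         // rule 0: one beat of silence
final case class Concat(a: Mel, b: Mel) extends Mel  // rules 1 and 2
final case class Concur(a: Mel, b: Mel) extends Mel  // rules 1 and 3
final case class Transpose(m: Mel) extends Mel       // rule 4: down a fifth
final case class Louder(m: Mel) extends Mel          // rule 4: up 3 dB

final case class Event(start: Double, semitones: Int, dB: Double)

def length(m: Mel): Double = m match {
  case Note | Rest  => 1.0
  case Concat(a, b) => 2 * math.max(length(a), length(b)) // power of two
  case Concur(a, b) => math.max(length(a), length(b))
  case Transpose(x) => length(x)
  case Louder(x)    => length(x)
}

// rule 1, current behaviour: repeat the shorter side until it fills `total`
// (lengths are powers of two, so `total` is always a whole multiple)
def tile(m: Mel, total: Double, at: Double, dB: Double, semi: Int): List[Event] =
  (0 until (total / length(m)).toInt).toList.flatMap { i =>
    render(m, at + i * length(m), dB, semi)
  }

def render(m: Mel, at: Double, dB: Double, semi: Int): List[Event] = m match {
  case Note => List(Event(at, semi, dB))
  case Rest => Nil
  case Concat(a, b) =>
    val half = math.max(length(a), length(b))
    tile(a, half, at, dB, semi) ++
      tile(b, half, at + half, dB - 2, semi)   // rule 2: 2nd half 2 dB down
  case Concur(a, b) =>
    val len = math.max(length(a), length(b))
    tile(a, len, at, dB, semi) ++
      tile(b, len, at, dB, semi + 12)          // rule 3: one voice up an octave
  case Transpose(x) => render(x, at, dB, semi - 7) // down a perfect fifth
  case Louder(x)    => render(x, at, dB + 3, semi)
}
```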

Synthgramelodia needs a lot of work, but as you can see from the oggs I
randomly dumped in that directory years ago, it has already produced some
things that are kind of listenable. One problem it has is that it tends to
produce scores with very large numbers of inaudibly quiet notes in them, and
then it does nearest-neighbor resampling in interpreted Python, so it runs
very slowly.

------
tessierashpool
A couple things about this post:

first, the NoteOn/NoteOff dichotomy comes from MIDI. it's pretty reasonable to
argue that any other attempt to model music would involve sounds both
beginning and ending, of course. but when you're designing sounds on a
synthesizer, the terminology is Attack (for the start) and Release (for the
end).
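
and those aren't instantaneous events like NoteOn/NoteOff, either: they're
ramp times in an amplitude envelope. a minimal sketch (mine, not any
particular synth's API):

```scala
// a minimal attack/hold/release envelope: Attack and Release are ramps
// in the amplitude curve, not on/off events.
def envelope(t: Double, attack: Double, hold: Double, release: Double): Double =
  if (t < attack) t / attack                  // ramp up from silence
  else if (t < attack + hold) 1.0             // held at full level
  else if (t < attack + hold + release)
    1.0 - (t - attack - hold) / release       // ramp back down
  else 0.0
```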

second, the author talks about skipping dynamics, to keep the code simple. so
the code implements a subset of what you'd get with traditional music
notation, i.e., the dots and squiggles people are talking about when they say
they can read music. but "the score is not the music" is a saying among
classical performers (iirc), and modern music software (e.g. Logic, Live,
DAWs) uses MIDI to express dynamics in much higher resolution than scores can.
and even then, "reading" the GUI of a DAW is not going to give you anything
more than a mild approximation of what the sound is going to be.
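
to make the resolution gap concrete, here's a crude sketch (the velocity
numbers are my own invention, not any standard mapping):

```scala
// a score gives you a handful of dynamic marks; MIDI gives every single
// note its own velocity from 0 to 127. (numbers here are illustrative.)
val dynamicMarks = Map(
  "pp" -> 33, "p" -> 49, "mp" -> 64,
  "mf" -> 80, "f" -> 96, "ff" -> 112
)
val midiVelocities = Vector(72, 81, 68, 90, 77, 85) // per-note nuance
```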

I did a talk about programming drum rhythms in CoffeeScript where somebody
asked me about jitter, aka swing, which is to say, a mechanism to take a
programmed drum beat and make it feel like a human played it. everyone gets
that difference subjectively, but there's also some research on what, exactly,
in a numerical sense, that difference is.

the best "swing" implementation, the Roger Linn system used in MPCs, the Linn
Drum, the Tempest, etc., just offsets the 2nd 16th note in every 8th-note
"window."

[http://www.attackmagazine.com/features/roger-linn-swing-groove-magic-mpc-timing/](http://www.attackmagazine.com/features/roger-linn-swing-groove-magic-mpc-timing/)
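
in code, the whole trick is tiny. an illustrative sketch (the code is mine,
based on the article's description; 50% means straight, ~66.7% means full
triplet swing):

```scala
// MPC-style swing: within each 8th-note window, delay the 2nd 16th note
// so it lands at `pct` of the window instead of at 50%.
def swing(onsets: Vector[Double], pct: Double): Vector[Double] =
  onsets.zipWithIndex.map { case (t, i) =>
    if (i % 2 == 1) t + (pct - 0.5) * 0.5 else t // window = 0.5 beats
  }

val straight = Vector.tabulate(16)(_ * 0.25) // straight 16ths, one 4/4 bar
val swung    = swing(straight, 0.58)         // 58% swing; 2/3 = triplets
```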

but with an actual human drummer, the moments when they go too fast, relative
to the absolute beat, are offset by moments when they go too slow. (all of
this is at millisecond-scale timing.) to my knowledge, nobody's really gotten
algorithms together to express or emulate this yet, although of course modern
DAWs have a lot of interesting options, e.g., extracting the "groove" from one
sample and applying it to another.
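
a naive zero-mean sketch of that idea (my own toy, definitely not the missing
algorithm):

```scala
// jitter each onset, then subtract the mean so rushing and dragging
// cancel out over the phrase.
import scala.util.Random

def humanize(onsets: Vector[Double], spread: Double): Vector[Double] = {
  val jitter = Vector.fill(onsets.length)((Random.nextDouble() - 0.5) * spread)
  val mean   = jitter.sum / jitter.length
  onsets.zip(jitter).map { case (t, j) => t + (j - mean) }
}
```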

my point basically is that the deeper you get into this, the deeper you
discover it goes. traditional musical notation is a graphic DSL full of cruft
and stupidity, but it takes a lot of work to cook up some code which can cover
everything that DSL covers, and if/when you get there, you just discover
endlessly more levels of detail to explore.

this is awesome if you want to explore it, but agonizing if you want to get
from installing a piece of software to making amazing music with it right
away. so anyone who makes a library like this has to balance making it
comprehensive against making it easy to get started, and in that sense, it's a
lot like designing a programming language.

~~~
TheOtherHobbes
That's because traditional music notation isn't a DSL - it's a form of
shorthand that gives you the bare minimum of information needed to create an
interpretation.

And if it's something like figured bass or a jazz lead sheet, it's not even
close to what's actually performed.

So if you take it literally and think it's some kind of definitive
description, you miss about 90% of what's going on.

> my point basically is that the deeper you get into this, the deeper you
> discover it goes.

Exactly. People who know more about computers than music keep reinventing this
list-of-notes idea over and over.

I've literally lost count of the number of times the same concepts have
appeared in different places over the decades.

The people who 'invent' them always seem very pleased that they've completely
mastered a whole new problem domain.

Sadly, no. Expressive music isn't about lists of notes.

Expressive music representations and creative music theory are full-scale
industrial machine learning and AI problems.

It's _hard_ to make music that more than a hundred people, not including your
friends, will pay to listen to more than once.

For me, that's the absolute non-negotiable base test for models that claim to
represent music in a useful way.

~~~
noelwelsh
Note that Compose is designed to be a teaching aid and a fun toy, not to
produce studio-quality results.

~~~
tessierashpool
yeah, I understand that. and I think it's a cool library. the + and | thing is
neat. but, fwiw, I do want to produce studio-quality results. :-)

~~~
davegurnell
Thanks for the comments -- agreed with everything you say. I'm also a musician
and the intersection between music and code is really interesting to me.
Compose is just a toy, but I'm interested in learning about musical systems
that are not.

