
Growing Music: musical interpretations of L-Systems (2005) - bjourne
https://www-users.cs.york.ac.uk/~susan/bib/ss/nonstd/eurogp05/index.html
======
jancsika
The drawings are more sophisticated than the generated music.

For example-- there's a musical progression in the 2nd movement of Mozart's
clarinet concerto where he has a fairly well-trod major key sequence
constructed of a rising fourth in the bass that gets sequenced up in steps
until hitting a very obvious cadence. Totally humdrum stuff. In fact you can
hear many composers during Mozart's time and after using this progression.

However, Mozart adds a trick-- at the end of each iteration he inserts a
_descending fifth_ bass pattern for a minor key that prepares the next step of
the sequence. These two sequences progress in lock-step to the cadence, as if
there were two completely independent progressions that are interleaved. Think
the melody of "Baby, it's cold outside" but with harmony.

The drawing for such a musical game is much more basic than what's shown in
the examples. But the musical upshot-- e.g., what you _hear_ as a listener--
is in a completely different universe of sophistication than the musical
examples given.

Yet the history of music is absolutely brimming with examples like the one
from Mozart that I gave, from composers of all stripes. I think the process
outlined here is too low-level to generate any kind of musical pattern of
interest.

~~~
ptah
what stops a composer from doing a mozart and taking the output from this
system and modifying it to make it more interesting? my point is that systems
like this can build starting points for composers to take further
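to make that concrete, here is a toy sketch (a generic example, not the paper's actual grammar) of the kind of raw material such a system spits out: symbols get rewritten in parallel each generation, then mapped to pitches, and a composer takes it from there.

```python
# Toy L-system as a source of musical raw material.
# The rules and the symbol-to-pitch mapping are arbitrary example choices.
RULES = {"A": "AB", "B": "A"}       # the classic "Fibonacci" L-system
NOTE_OF = {"A": "C4", "B": "G4"}    # hypothetical pitch assignment

def expand(axiom, generations):
    """Rewrite every symbol in parallel, once per generation."""
    s = axiom
    for _ in range(generations):
        s = "".join(RULES.get(ch, ch) for ch in s)
    return s

def to_notes(s):
    """Map the final symbol string to a note sequence."""
    return [NOTE_OF[ch] for ch in s]

print(to_notes(expand("A", 4)))  # a starting point, ready for human revision
```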

~~~
p1esk
_systems like this can build starting points for composers_

They are. See for example [http://aiva.ai](http://aiva.ai)

------
pera
Here is an example of a Max/MSP patch that uses l-systems:
[https://www.youtube.com/watch?v=Z3hoAuS3qzg](https://www.youtube.com/watch?v=Z3hoAuS3qzg)

If you are curious about algorithmic compositions you may want to check the
British duo Autechre. The title of their 2013 EP, _L-Event_, is probably a pun
on both l-systems and the eleventh (the interval):
[https://www.youtube.com/watch?v=sKtrcF_Y16Y](https://www.youtube.com/watch?v=sKtrcF_Y16Y)

------
nineteen999
This is a fascinating article, but I can't help feeling that the cheesy synth
sound with the vibrato and echoes used to render the music detracts from the
presentation and interpretation a fair amount. A fairly plain piano sound
would have worked better, in my opinion. A subjective matter, of course.

------
qwerty456127
How did you render the music? I want to experiment with algorithmic music
generation too.

~~~
AtomicOrbital
dunno what they used, but the simplest version is to have your code
synthesize the audio curve directly: just a time series of floating point
numbers (or integers) analogous to the wobble of the microphone membrane or
your eardrum.

Two fundamental notions govern this raw audio: bit depth and sample rate.
Bit depth determines how finely the curve gets digitized; typically you use
two bytes (16 bits) to store each point on the curve. Sample rate is simply
how many of these audio samples you store per second (CD quality is 44,100
samples per second). Audio at this raw level is called PCM.

To render it, the easiest route is to output a 44-byte header describing the
audio spec, followed by the PCM payload, where each sample (a point on the
audio curve) is written across two consecutive bytes. Then you have your own
WAV file, which can be played with command line tools like ffplay, aplay,
vlc or whatever
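
a minimal sketch of the above in Python, using only the standard library (the `wave` module writes the 44-byte header for you; the 440 Hz tone and one-second duration are just example values):

```python
# Synthesize a sine wave as raw 16-bit PCM samples and wrap it in a WAV
# container, as described above. Standard library only.
import math
import struct
import wave

SAMPLE_RATE = 44100   # CD-quality samples per second
BIT_DEPTH = 16        # two bytes per sample

def sine_samples(freq_hz, duration_s, amplitude=0.5):
    """Generate 16-bit PCM samples for a pure tone."""
    n = int(SAMPLE_RATE * duration_s)
    max_amp = 2 ** (BIT_DEPTH - 1) - 1  # 32767 for 16-bit audio
    return [
        int(amplitude * max_amp * math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE))
        for i in range(n)
    ]

def write_wav(path, samples):
    """Write mono 16-bit PCM; wave emits the 44-byte RIFF/WAVE header."""
    with wave.open(path, "wb") as f:
        f.setnchannels(1)
        f.setsampwidth(BIT_DEPTH // 8)
        f.setframerate(SAMPLE_RATE)
        f.writeframes(struct.pack(f"<{len(samples)}h", *samples))

write_wav("tone.wav", sine_samples(440.0, 1.0))  # one second of A4
```

then `ffplay tone.wav` (or aplay, vlc, etc.) plays it.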

~~~
qwerty456127
Thanks. But what I am looking for is to render an array of notes to PCM (or
whatever). I've tried defining the notes as frequencies and using a waveform
rendering library (in Python) but it's very slow and clicky - a wave of a
particular tone gets cut abruptly right before the next one starts and that
produces a click. I need something more intelligent to sort of crossfade
subsequent tones into each other or so. Perhaps I should just use a filter...
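The kind of thing I mean, sketched with a short linear fade per note (assuming the samples are plain Python floats): the click comes from the waveform jumping discontinuously at the note boundary, so forcing each note to start and end at zero removes it.

```python
# Per-note amplitude envelope: a few milliseconds of linear fade-in and
# fade-out so each note starts and ends at zero, eliminating the click.
# Frequencies and durations below are arbitrary example values.
import math

SAMPLE_RATE = 44100

def tone(freq_hz, duration_s):
    """A raw sine tone as a list of floats in [-1, 1]."""
    n = int(SAMPLE_RATE * duration_s)
    return [math.sin(2 * math.pi * freq_hz * i / SAMPLE_RATE) for i in range(n)]

def apply_envelope(samples, fade_s=0.005):
    """Linear fade-in and fade-out over fade_s seconds at each end."""
    fade_n = min(int(SAMPLE_RATE * fade_s), len(samples) // 2)
    out = list(samples)
    for i in range(fade_n):
        gain = i / fade_n
        out[i] *= gain        # fade in
        out[-1 - i] *= gain   # fade out
    return out

# Concatenating enveloped notes: no abrupt jump at the boundaries.
melody = []
for freq in (261.63, 329.63, 392.00):  # C4, E4, G4, for example
    melody.extend(apply_envelope(tone(freq, 0.25)))
```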

~~~
magicalhippo
Isn't this a good case for MIDI? For example using something like
[https://github.com/nwhitehead/pyfluidsynth](https://github.com/nwhitehead/pyfluidsynth)

