Firstly, this sounds quite a bit like the Symphony For Dot Matrix Printers. If you haven't heard it, I urge you to find a copy. I've linked to some video below, but it is significantly better as an album. It has all the compositional richness of a classical symphony, performed in realtime by an orchestra of printers. I dare say, it is my single favourite musical work.
Secondly, I'm reminded of the cassette tapes that computer programs used to come on (Commodore 64, etc). My friend had a box full of these cassettes, but I can't remember what machine they were for. We used to listen to the tapes and jam along with our laptops. It was a bit like having a garage band, I guess, but for tech-heads.
#1 is my favourite, but #2 is also fascinating. #1 has more immediate textural/tonal exploration, while #2 is a more gradual evolution. Feel free to skip around the videos to get a sense of the breadth of sounds in these symphonies.
I do find this a bit of a stretch: "Leaving the constituent elements untouched, the process imposes a new order upon them, reorganizing the sounds along a musical structure." There are parts in there that are almost certainly slowed after recording. The deep thud of the carriage changing direction (starting at 1:47 in video #1) is one such sound that I believe has been slowed.
(As an aside, bemmu's version is creating 16-bit signed .WAV data from 8-bit signed functions, so it isn't doing it quite right either. Why not just create an 8-bit WAV data block? I've done so here: http://a1k0n.net/code/audiogen.html)
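For the curious, here's roughly what that looks like in Python -- my own sketch, not a1k0n's actual code, and the formula and filename are just placeholders:

```python
import wave

RATE = 8000  # sample rate assumed by the classic /dev/audio formulas

def bytebeat(t):
    # Placeholder formula; any expression over t yielding 0..255 works here.
    return (t * 5) & 255

def write_wav(path, seconds=4):
    # 8-bit WAV stores samples as unsigned bytes, so 0..255 maps directly.
    frames = bytes(bytebeat(t) for t in range(RATE * seconds))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(1)    # 1 byte per sample -> 8-bit PCM
        w.setframerate(RATE)
        w.writeframes(frames)
    return frames

frames = write_wav("bytebeat.wav")
```

No 16-bit conversion step needed; the formula's byte output goes straight into the data block.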
Those are really cool! Is there any way to explain what the different symbols and operators are doing? I'm a web developer with only a vague understanding of a little of the code, but I'm also a musician and would love to understand a bit more so I could try to make my own (without just randomly typing numbers and symbols).
For example, is there some correlation between adding a ">>" or "%" operator and how that affects the timing? I saw some of the super basic formulas referenced in the blog post, and see that t*4 generates a tone, t*5 generates a higher tone, etc. -- is there a way, for example, to put those one after the other to compose something (or does that defeat the purpose of this exercise)?
Maybe another way to ask the question, is in your 3-part harmony example, do certain portions of the code correspond to certain parts of the generated tone?
Or is this all just totally random, trial-and-error stuff?
Some of the really compact formulae are trial-and-error stuff, but it doesn't have to be. You can make an arbitrarily complicated software synthesizer with this thing, and if you just want to compose algorithmic music in a "constructivist" way, you can do something like this:
So I've defined a function "SS", which stands for "sawtooth wave applied to sequencer". The sequencer takes an ASCII representation of notes, a speed, and a base octave, and converts them into relative note frequencies; the sawtooth wave synth is then done by 31&t*<multiplier>, which is a number that cycles between 0 and 31 at a frequency determined by the multiplier. Just typing "31&t" into the code will give you a sawtooth wave at 8000/32 = 250Hz, which is either a flat middle C or a sharp B below that -- 8000 being the sample rate, 32 being the number of steps in the wave generated by "31&t".
So one of those is synthesizing a bassline, and the other is an arpeggio that starts and stops when t&4096 is true, which means it's "off" for about half a second and "on" for about half a second (4096/8000 ≈ 0.51 s, to be precise). Add the bassline to the arpeggio and you get both voices at once.
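To make that concrete, here's a rough Python sketch of the same idea -- the note table, tempos, and note strings are my own guesses for illustration, not the actual SS code:

```python
RATE = 8000  # sample rate assumed by the "31&t is ~250Hz" observation

# Hypothetical note table: semitone offsets for the ASCII note names
SEMITONE = {"c": 0, "d": 2, "e": 4, "f": 5, "g": 7, "a": 9, "b": 11}

def seq_saw(t, notes, speed, octave=1):
    """Sawtooth applied to a sequencer: pick the current note from an
    ASCII string, then synthesize it as 31 & t*multiplier (a 0..31 ramp)."""
    ch = notes[(t // speed) % len(notes)]       # advance one note every `speed` samples
    mult = octave * 2 ** (SEMITONE[ch] / 12)    # relative note frequency
    return 31 & int(t * mult)

def sample(t):
    bass = seq_saw(t, "ccgg", 4000)             # slow bassline
    arp = seq_saw(t, "ceg", 500, octave=2)      # faster arpeggio...
    if not (t & 4096):                          # ...on only while t&4096 is true,
        arp = 0                                 # i.e. about half a second at a time
    return (bass + arp) & 255                   # add the voices, clamp to a byte

samples = [sample(t) for t in range(RATE)]
```

Piping those samples out as bytes gives you both voices at once, as described above.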
My version has a textarea just so I could put things like that together. The OP, however, is less interested in the constructivist approach and more in the trial-and-error approach where you find something that can be implemented in 10 bytes of assembler and it magically produces a symphony.
I'm not a musician, so this is just my understanding of the changes as a programmer, arrived at through trial and error.
The '<<' and '>>' are bit-wise shift operators.
The '>>' is a right-shift, which is equivalent to dividing by a power of two, e.g. t>>n == t/(2^n). This can be used to generate a beat.
The '<<' is a left-shift, which is equivalent to multiplying by a power of two, e.g. t<<n == t*(2^n). This tends to shift the pitch up.
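You can see both shifts at work in plain Python (values only, no audio):

```python
# t>>n divides by 2**n, so the output only changes every 2**n steps --
# a fast counter becomes a slow beat or envelope:
print([t >> 2 for t in range(8)])        # [0, 0, 0, 0, 1, 1, 1, 1]

# t<<n multiplies by 2**n, so the wave cycles 2**n times faster,
# raising the pitch by n octaves (here wrapped to 3 bits to show the cycle):
print([(t << 1) & 7 for t in range(8)])  # [0, 2, 4, 6, 0, 2, 4, 6]
```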
The modulo operator, '%', has very little effect on its own, but in combination it can generate a looping set of ranged variance, which is particularly pronounced when combined with a division operator to create clear stepping. t%n monotonically increases to n-1, then falls back to 0 and starts increasing again. e.g. t*(t%16/64) -- this increases through 16 phases, holding for 4 periods.
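One caveat: in C-style integer arithmetic t%16/64 truncates to 0, so that example presumably assumes JavaScript's floating-point division. The looping and stepping behaviour of % itself is easy to see in plain Python:

```python
# t % n loops: it ramps 0..n-1 and then wraps back to 0.
print([t % 4 for t in range(10)])      # [0, 1, 2, 3, 0, 1, 2, 3, 0, 1]

# Combined with integer division it produces clear steps,
# each value held for (divisor) samples before moving on:
print([t % 8 // 2 for t in range(8)])  # [0, 0, 1, 1, 2, 2, 3, 3]
```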
Addition, '+', and subtraction, '-', haven't been very useful for me except when combined with sin(t) or cos(t), which oscillate between -1 and 1, resulting in some interesting changes to the beat. e.g. t>>4+cos(t) shifts between t>>3 and t>>5 (the addition binds tighter than the shift, so this is really t>>(4+cos(t))).
The bitwise AND '&' and OR '|', and the short-circuiting OR '||', come in during composition. Plain OR tends to generate better results, but that's trial and error talking. Honestly, too tired to think about this.
BTW, I think the comments saying that aplay or pacat are equivalent to /dev/audio are slightly mistaken. /dev/audio is μ-law, like aplay -f MU_LAW. aplay defaults to linear unsigned 8-bit. With the sine wave above, the difference is very noticeable. With the various distorted sawtooths and white-noises in the original videos, it's harder to tell, but I'm reasonably sure that they're using μ-law, not linear.
BTW, aplay refuses to play 8-bit audio on my Logitech USB speakers. So I ended up using sox to convert: