Generating More of My Favorite Aphex Twin Track (medium.com/metalex9)
451 points by sajid on March 26, 2019 | 216 comments

There's a nice website with a clean UI called 'Music for Programming', put together by a guy who curates playlists designed to complement anyone who wants to buckle down and do deep-focus creative work for a while. I've been through each of his 53 playlists several times. 'aisatsana' is one track in one playlist, but the music overall is very much in keeping with its mood and tone.


The curator of Music For Programming [0] (which is fantastic, yes!) also put together a very silly collection of mixes called Businessfunk [1], which also make great coding music.

And if you try Businessfunk and decide you have the stomach for it, I recommend following up with the Komputer Cast mixes by Com Truise [2].

[0] https://musicforprogramming.net [1] http://datassette.net/businessfunk/ [2] http://comtruise.com/kc/

Datassette also has a mix for the Near Mint web radio program which is essentially Businessfunk No. 4: https://mixcloud.com/Resonance/near-mint-8th-march-2016-data...

Notably, Businessfunk mixes are made of ‘library music,’ i.e. stock musical clipart. Gotta say, I've lately heard a bunch of rather good music from libraries, e.g. the entire soundtrack of ‘IASIP’ (though that one is from “production music” libraries, i.e. more specialized for film, television and the like).

That is linked to from music for programming as "Enterprise mode" lol.

Thanks for this. I listen to a lot of Com Truise on Spotify, but had no idea about the Komputer Cast stuff!

All of the suggestions here seem too distracting for me unfortunately.... :-/

These three bands work well for me when coding (or just "space ambient" on youtube):

- AES Dana: https://ultimae.bandcamp.com/album/perimeters

- Carbon Based Lifeforms: https://carbonbasedlifeforms.bandcamp.com/album/hydroponic-g...

- Solar Fields: https://solarfields.bandcamp.com/album/movements-remastered

Hearts of Space [1] has been doing ambient music for a long time, and it’s great for programming and tends to be very low distraction.

[1] https://hos.com

What exactly is this? They are not a band, are they?

It’s a radio show, with an associated streaming service/record label.

Many public radio stations carry it on Sunday nights, even the “traditional” format ones that are mostly classical music and news. It was my first introduction to ambient music, and the host’s voice gives me a mix of nostalgia and panic (as it meant I needed to be getting on with my homework).

Good sounds :) Do you know Future Sound of London? Try their Lifeforms and Environments (I, II, III, ...) albums.

FSOL is great for washing the dishes to, but I find they can be a bit too dissonantly clicky for coding, reading, etc. But don't get me wrong, they're one of my favourite outfits and I used to play them all the time when I did radio.

For me I'd say Tycho is a bit more my bag when it comes to a tippy-tappy-loads-of-code session.

Such a well curated list. I listen to it all the time at work.

I was already intimately familiar with many of the artists featured (Aphex twin, oneohtrix point never, tim hecker) but have definitely discovered some new ones through it too.

To piggy-back off this, the record label Silent Season is full of great ambient/downtempo/spacey music for programming. Was listening to this today: https://shop.silentseason.com/album/the-waves

Did you check out the Wanderwelle album that came out last year? Bliss!

aisatsana reminds me of C418's music, mostly the Minecraft stuff, but there is a lot more.

k, I fixed it, thanks.

It's really sad to see music that I grew up on and had many life experiences around get simply reduced to "music for ____".

Just because it can serve some purpose for others doesn't have to take away from the meaning it has for you

A lot of the music I love from my younger years, I can still listen to either actively for enjoyment or passively for concentration. It's entirely up to me.

I love Music for Programming!

For musicians and devs, it's a good hint that Markov chains are "good enough" for most interesting music applications, ML/AI doesn't typically yield better results. Happy to see this on the front page!

I don't disagree with this observation, but I wanted to mention a counterpoint.

Eno's "Music for Airports" famously uses a system of multiple tape loops that produce sequences of different periods. As you listen, you can hear phrases that occur nearly together and then later, well-separated in time, as the periods of these loops go in and out of phase:

"One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank's studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again."

This interaction of periods will have long memory. (If the tape lengths are L and L+d, for small d, then the repeat time could easily be as long as L*(L/d), and even longer if d does not divide L evenly.) Thus, it is very different from what you get with a Markov chain.
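For a concrete sense of scale, here's a small sketch that computes the exact re-sync time of two of the loop periods Eno quotes, treating them as exact fractions (real tape drifts, which is what makes the periods effectively incommensurable):

```python
from fractions import Fraction
from math import gcd, lcm

def resync_time(a: Fraction, b: Fraction) -> Fraction:
    """Seconds until two loops of (rational) periods a and b start
    together again: the least common multiple of the two periods."""
    # lcm of two fractions: lcm of numerators over gcd of denominators
    return Fraction(lcm(a.numerator, b.numerator),
                    gcd(a.denominator, b.denominator))

# Two of the periods Eno mentions: 23 1/2 s and 25 7/8 s
t = resync_time(Fraction(47, 2), Fraction(207, 8))
print(t)                          # 9729/2 seconds
print(round(float(t) / 3600, 2))  # about 1.35 hours
```

Even with only two loops and these tidy fractions, the pair takes well over an hour to realign; add the third loop (and real-world tape stretch) and the pattern never audibly repeats.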

There was an interview with Eno in Wired in 1995 [1] that I still think about sometimes. Amongst other things, he talks about lifting himself to the next meta-level of composition, where he doesn't write music, he writes rules for a box which makes music:

Q: If I could give you a black box that could do anything, what would you have it do?

A: I would love to have a box onto which I could offload choice making. A thing that makes choices about its outputs, and says to itself, This is a good output, reinforce that, or replay it, or feed it back in. I would love to have this machine stand for me. I could program this box to be my particular taste and interest in things.

Q: Why do you want to do that? You have you.

A: Yes, I have me. But I want to be able to sell systems for making my music as well as selling pieces of music. In the future, you won't buy artists' works; you'll buy software that makes original pieces of "their" works, or that recreates their way of looking at things. You could buy a Shostakovich box, or you could buy a Brahms box. You might want some Shostakovich slow-movement-like music to be generated. So then you use that box. Or you could buy a Brian Eno box. So then I would need to put in this box a device that represents my taste for choosing pieces.

[1] https://www.wired.com/1995/05/eno-2/

I spent a huge amount of time playing around with a demo of SSEYO Koan Pro which Eno used to make Generative Music 1 [1] and which evolved over time into Wotja by Intermorphic [2]. It's about time I tried Wotja out!

[1] https://en.wikipedia.org/wiki/Generative_music

[2] https://intermorphic.com/wotja/

This should be doable? You could feed your entire catalogue into the machine, and have it start generating music. For every pass, you could listen to the created piece and tell it "I would not make anything like element X, because of [specific reason], so take that out."

Enough of these passes and detailed removal of elements you wouldn't create, and you should get closer and closer to a machine that would be your musical clone.

This is missing the complexity of a lot of music. Swap "music" in your post for "books" or "software" or "human interactions" and maybe the skipped complexity becomes obvious. What if the key ingredient to someone's music is a long over-arching structure throughout the song of highs and lows following the emotional progression of a predictable 3-act story? The software would need to be capable of recognizing and synthesizing and tuning that kind of high-level element and surely many others like it. Even songs that are made up of a few repeating tracks overlaid probably have high-level qualities like that, where the artist tweaks the setup until the end result meets the high-level goal they were aiming for.

I'm sure a musician who was also a programmer who understood their own tastes and creation process well enough could currently create something like this for a specific genre, but I think we're very far from a one-size-fits-all generator.

You need a machine description of "element" - which turns out to be a lot harder than it sounds.

How well would it cope with the evolution of an artist's oeuvre; would it recognise that certain sounds are from a certain period?

> What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again.

I remember a few years back there was a quick guide to visual design that I saw that recommended this. To provide what appears to be randomness from repeating samples (or at least to prevent the easy pattern matching in the brain, which is distracting), take three images of different prime-number lengths, tile each one, and layer them over one another.

I believe the example used was the ruffles in a stage curtain, where there were a few layers. Each layer was a repeating image of length 3, then length 5, then length 7 (but the image itself had some variations between ruffles within it). You won't get a point where all the layers stop and start at the same place (a dead giveaway of the pattern) until length 105.
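The arithmetic behind that 105 is just the least common multiple of the layer lengths; a quick sketch to confirm:

```python
from math import lcm

layers = [3, 5, 7]   # tile widths of the three repeating curtain layers
period = lcm(*layers)
print(period)        # 105: first point where every layer restarts together

# brute-force confirmation that no earlier position lines up
assert not any(all(n % w == 0 for w in layers) for n in range(1, period))
```

With non-prime widths like 4, 6, and 8 the combined period would collapse to just 24, which is why primes are the cheapest way to stretch the apparent randomness.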

I remember that article too! I used the same technique to make a convincing, non-duplicating flame-flicker algorithm for an led. 3 variables, all looping to different prime numbers and the sum of the 3 variables was the led intensity.
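A minimal sketch of that flicker trick (the primes, waveform, and brightness mapping here are my own guesses, not the original code):

```python
import math
from itertools import islice

def flicker():
    """Yield LED intensities (0..255) from three oscillators whose
    periods are distinct primes, so the combined pattern repeats
    only every 7 * 11 * 13 = 1001 steps."""
    primes = (7, 11, 13)
    t = 0
    while True:
        # each oscillator contributes a third of the total brightness
        level = sum(math.sin(2 * math.pi * (t % p) / p) for p in primes)
        # map the sum from roughly [-3, 3] onto [0, 255]
        yield int((level + 3) / 6 * 255)
        t += 1

for v in islice(flicker(), 10):
    print(v)
```

On real hardware you'd write each value to a PWM pin on a timer tick instead of printing it; the point is just that three cheap counters give a pattern too long for the eye to catch.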

Here's the article you're referring to - https://www.sitepoint.com/the-cicada-principle-and-why-it-ma...

I actually figured there was a good chance someone here would reply with it, since I believe I originally found it here either from a submission or a comment. Thanks!

Hm, it works well for this particular song/the genre of generative music but it easily hits limits with many other types of music people might want to work with. More advanced ML/AI techniques are needed to reason about sound in a semantic/latent space, which a stock Markov model will not do.

I feel like the Markov chain could also apply to higher-level song structure, with probability weights for choruses, verses, bridges, breakdowns, solos, etc.

I prefer to just write a bunch of riffs and see where they grow organically, but it seems the chain would be a great tool to piece some of those ideas together or give an indicator of where they could go when a creative block is hit.
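A section-level chain like that fits in a few lines; the sections and transition weights below are made up purely for illustration:

```python
import random

# Transition weights between song sections (the numbers are invented)
transitions = {
    "intro":  {"verse": 1.0},
    "verse":  {"chorus": 0.6, "verse": 0.2, "bridge": 0.2},
    "chorus": {"verse": 0.5, "bridge": 0.2, "solo": 0.2, "outro": 0.1},
    "bridge": {"chorus": 0.8, "solo": 0.2},
    "solo":   {"chorus": 0.7, "outro": 0.3},
    "outro":  {},                # terminal state: the song ends here
}

def generate_structure(seed=None):
    """Walk the chain from 'intro' until the terminal 'outro' state."""
    rng = random.Random(seed)
    section, song = "intro", ["intro"]
    while transitions[section]:
        section = rng.choices(list(transitions[section]),
                              weights=list(transitions[section].values()))[0]
        song.append(section)
    return song

print(generate_structure(seed=42))
```

Each run produces a plausible skeleton (intro, some verse/chorus cycling, an eventual outro) that you could then fill with the riffs you've already written.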

I think it would be more interesting to create a formula that sounds nice. In effect, that's what's done when creating structures "organically". Hence a lot of music is formulaic, but the problem is that the formula is predictable on purpose. It's not strictly simple, and not everyone can do it freely: the more complexity, the harder it is to stay within any given constraint. Vice versa, adding a lot of complexity just to satisfy a weird constraint, instead of relaxing the constraint, can seem overloaded. The latter notion may be a meta-constraint, so you get a dynamical system. The dynamics of a Markov chain are either limited or hard to control as far as I can tell, as it's a random process. But expressiveness is king. The way I imagine controlling that would involve human interpretation in a trial-and-error loop, and therefore a lot of bad music. This, and the imitation of prior art, is a given for unplugged music too, of course, but the material to draw from is so diverse that a Markov chain is likely to pick up all the bad parts as well.

I mean, most music is terribly dull and meaningless in the grand picture of things. The selection of input would be more like the job of a DJ. I don't know how much creativity can fit into programming the chain, or into choosing what features to pick up. It would probably just be part of a bigger workflow: starting small, creating short samples, arranging those, et cetera.

For the story it helps that Aphex Twin is rather random to begin with (e.g. having created a track by inverse Fourier synthesis over a grayscale picture, producing the sound for a desired spectrogram). The irony is appreciable, and commendable if someone likes the result.

Coincidentally working on this right now...

Let me know how that goes! I know as a songwriter it'd be a great tool to have on hand.

What you describe is exactly what I'm building - https://songsling.io

Didn't want to explicitly promote it here in the thread, but I'm building it to serve as a general music-content engine for artists (also for my own music, let's be honest) that will let you upload composition parts and the player will put them together on the fly. It already has interactivity built in where you can serve a different version of a track based on changing inputs you connect from your Internet presence etc. Plus, you can DJ with it, it supports smoothly changing playback speeds. It needs to happen.

One use-case that immediately jumped to mind for an API plugin is integration with popular games for game streamers to use. Something that detects that you're in the middle of a gunfight vs just chilling in the lobby, and selects/modifies the music appropriately.

Some video games do this automatically with their in-game music, but not all, and many streamers prefer using other music to avoid it getting stale.

Just wanted to say that this looks super dope. Love the aesthetic as well.

> it's a good hint that Markov chains are "good enough" for most interesting music applications, ML/AI doesn't typically yield better results

Are Markov chains not considered ML/AI? I would consider them to be a standard part of the field

On a side note, saw a nice write-up celebrating the recent 25th anniversary of Selected Ambient Works Volume II:


I feel like that album has a lot of potential for generative experiments. Admittedly, the album has an over-arching tone of eeriness throughout, which isn't something I want to listen to while I work most days. Maybe it would be inspiring to game developers working in the horror genre :)

Actually, speaking of games with Aphex Twin's music, I'm not sure how many folks here recall the Dreamcast classic 'Rez' - a synesthetic, psychedelic rail-shooter which actually featured a track of his, credited under his legal name, 'Richard D. James'.

If you guys haven't played it, seriously give it a shot. I believe there is a re-released version on XBOX and PC, though I can't speak to the quality of the ports, as I still play my Dreamcast almost daily. ;)

Rez is brilliant and I remember my dreamcast fondly.

This page seems to say that his music wasn't used in the released version. https://www.unseen64.net/2008/04/10/k-project-rez-prototype/

Okay, that seems to directly contradict my actual memory of the game! I'm absolutely going to have to confirm that. I distinctly remember reading 'Richard D. James' and recognizing the name, as I was a massive fan of 'Drukqs'.

Strange how memory affects us.

EDIT: And who, lucky enough to own one, does not remember the Dreamcast fondly? ;) Crazy Taxi, Quake III, Skies of Arcadia, Ecco, THPS2 - I'd argue the ratio of quality software to shit software was almost 1:1, compared to, for instance, the PS2, where you might get 1 great game for every 10.

I recall there's a Dreamcast disc image floating around somewhere with a different music selection. Depending on where you got your copy of Rez, who knows what you've got. :)

I just pulled my Dreamcast out to play some of the games with my son. It's still a great system. There are some low-quality games, but it definitely feels like you have to seek them out, as opposed to the shovelware feeling I get on the PS1/PS2. Looking at the released game list, though, I feel like some of these titles were probably good at the time but won't hold up, while many of them are still as good as they were. VGA output is pretty nice too.

All the ports have been fantastic. The PS4 version supports PSVR, and the experience of Rez in VR is quite magical tbh.

Tetsuya Mizuguchi, the producer of Rez, also worked on Tetris Effect which has an amazing soundtrack. Tetris Effect feels like a weird cousin of Rez btw!

>> The PS4 version supports PSVR

Oh, fuck off. That's amazing. Colour me almost wanting to grab one. Wish it had a Vive port.

I'm not a fan of Tetris. I like games that have a very distinct 'beginning and end of the level' feeling.

Check out Rez Infinite on Steam. Seems to support Oculus and Vive. (https://store.steampowered.com/app/636450/Rez_Infinite/)

Rex Infinite is the only game that I strongly prefer to play on a Vive over a Rift. The Rift is more comfortable, but the Vive has a higher peak brightness. If you crank the settings in the game, the difference between headsets is significant.

PS4 VR port [1]. I say do it. I say it's the only thing worth having in VR. I say a bunch of stuff, much of it more nonsensical than this. You'll know if you need PS4 VR now or if you don't. There is no trance vibrator or Game Girl Advance review.


[Edit] Eh, lots of love lower in the thread. Have this instead...


I played Rez on PS2 and it was amazing. Looking forward to buying Rez Infinite on PC.

One thing I like to do is combine the tracks from each disc, so I listen in an order like track 1, 13, 2, 14, etc. Most of the beginning tracks are less creepy.

Also, there is a "missing" track 19 you can preview on his site: https://aphextwin.warp.net/release/68148-aphex-twin-selected....

I do the same, small world. I find that Drukqs benefits from re-ordering the tracks even more than SAW II.

As mentioned in that article, the eeriness allegedly comes from him creating the album through lucid dreaming. Here's the original interview about that; it's pretty fascinating:

Interview with David Toop, March 1994, The Face

Broaching this subject of dreams, he becomes animated and talks a long streak. "This album is really specific," he says, "because 70 percent of it is done from lucid dreaming... To have lucid dreams is to be conscious of being in a dream state, even to be capable of directing the action while still in a dream. I've been able to do it since I was little," Richard explains. "I taught myself how to do it and it's my most precious thing. Through the years, I've done everything that you can do, including talking and shagging with anyone you feel that takes your fancy. The only thing I haven't done is tried to kill myself. That's a bit shady. You probably wouldn't wake up, and you wouldn't know if it had worked, anyway. Or maybe you would.

"I often throw myself off skyscrapers or cliffs and zoom off right at the last minute. That's quite good fun. It's well realistic. Eating food is quite smart. Like tasting food. Smells as well. I make foods up and sometimes they don't taste of anything—like they taste of some weird mish-mash of other things."


"About a year and a half ago," he says, "I badly wanted to dream tracks. Like imagine I'm in the studio and write a track in my sleep, wake up and then write it in the real world with real instruments. I couldn't do it at first. The main problem was just remembering it. Melodies were easy to remember. I'd go to sleep in my studio. I'd go to sleep for ten minutes and write three tracks - only small segments, not 100 percent finished tracks. I'd wake up and I'd only been asleep for ten minutes. That's quite mental.

"I vary the way I do it, dreaming either I'm in my studio, entirely the way it is, or all kinds of variations. The hardest thing is getting the sounds the same. It's never the same. It doesn't really come close to it. When you have a nightmare or a weird dream, you wake up and tell someone about it and it sounds really shit. It's the same for sounds, roughly. When I imagine sounds, they are in dream form. As you get better at doing it, you can get closer and closer to the actual sounds. But that's only 70 percent of it."

I vaguely knew that Richard is, ahem, something of a trickster in interviews, but now I have a piece of evidence: hitting REM sleep in ten minutes? Yeah, right.

> Admittedly, the album has an over-arching tone of eeriness throughout, which isn't something I want to listen to while I work most days.

I think it may have to do with me being ADHD, but I actually like ambient music with a little bit of a stressful edge to it. It gives me a little bit of urgency and controlled stress seems to be my best motivator.

If you guys enjoy generative music, you should try Sunvox.

The author has also created a JS library which you can use to play .sunvox files in the browser; it is pretty nice too.


try 'machine 005'. I've been using it for coding lately.

Sunvox has a js lib now?!?!? YAAAAS

Yeah! Webassembly blob and you use JS to operate it.

"Selected Ambient Works 85-92" is a quality product.

Green Calx, while an awesome track, always breaks my concentration. I have made many playlists named SAW1-GC on different platforms and have gone so far as to consider using a utility knife to cut a custom diagonal skip groove through it on one of my three vinyl copies of SAW 85-92.

Probably my favorite album for programming and other work that require concentration.

I still haven't encountered anything like it. I wish there was more.

One path to explore is to start with interesting covers of Aphex Twin works:

- The Alarm Will Sound collective has an album called Acoustica where they made acoustic-instrument-based covers of Aphex Twin songs. https://www.alarmwillsound.com/

- The Bad Plus covered Aphex Twin's Flim on their These Are the Vistas album. https://www.youtube.com/watch?v=HeMre0Sp7o4

Both Alarm Will Sound and The Bad Plus are somewhat different, quite interesting directions to explore for other music you might enjoy programming to.

Gerald Donald's work (aliases Dopplereffekt, Arpanet and Japanese Telecom; he was also part of Drexciya) has worked well for me as programming music for years. Especially the album "Inertial Frame" from Arpanet, and "Calabi Yau Space" from Dopplereffekt as well (which was released on Aphex Twin's own label, Rephlex).

For those in San Francisco: he will be playing at the MUTEK festival in May.

Thanks for reminding me about inertial frame

Aphex Twin (anonymously) put some previously unreleased material on SoundCloud a few years back that is from that era.

Another recommendation: Boards of Canada - Tomorrow's Harvest

BOC’s The Campfire Headphase changed my life.

Warp Records (record label that Aphex Twin and Boards of Canada are a part of) has built a really solid catalogue over the years

I agree now with BOC purists that it's not their best record, but "Dayvan Cowboy" came at such a tectonic moment of my life I wouldn't decisively argue it didn't start the earthquake.

Plaid is another cool artist on Warp Records.

Oh, yes, I forgot Plaid. I used to listen to Plaid along with Amon Tobin back in the early 2000s.

Check out the artist Solar Fields

The Mirror's Edge soundtrack got me into Solar Fields, and it's remained in my programming rotation since it came out, along with that of Mirror's Edge: Catalyst.

Example from Catalyst - https://www.youtube.com/watch?v=2fb5_zVk2gY&t=1h46m48s

Oh nice, just realized Solar Fields did both the Mirror's Edge games' soundtracks

I love Solar Fields.

Also: Global Communication, Ocoeur, Christopher Willits, Marconi Union, Eluvium, Ólafur Arnalds, Balmorhea.

Can’t forget Keith Kenniff’s projects, Helios and Goldmund.

Besides the recommendations already made, I can recommend Field Rotation: https://fieldrotation.bandcamp.com/

Some favorites:



You should give a listen to the early Autechre or Speedy J. Or anything Warp put out in the 90s.

Worth mentioning Boards of Canada and B12 from the Warp catalog as well. Warp has been my go-to label since the early 2000s.

LFO are also top-drawer stuff.

Highly, highly, highly recommend Autechre's Incunabula, for people who are into Selected Ambient Works.

Yes, if you're listening to it just as background music, early Warp catalog would work.

You should check out:

- Autechre

- Squarepusher

- Bonobo

- Wisp

- Brian Eno

In that order. All solid. If you want more, you should check out the Rephlex Records current and previous artist lists. All really solid ambient-focused electronic music.

Wisp has done some nice remixes of Aphex Twin songs: http://wisp.kaen.org/audio/saw2reworked.zip

(via: http://www.wisp.kaen.org/ )

Bibio’s Phantom Brickworks does remind me a bit of Stone in Focus. I think Capel Celyn is an especially beautiful piece, even more so if accompanied by the music video.

Try some albums of the following artists: Biosphere, Boards of Canada, Forest Swords, Kink Gong, Johann Johannsson, Park Jiha. Hope you enjoy it!

Check out the artist "Actress"

Thanks for writing this and spending the time to break down and explain how you did it.

I’m not the biggest Aphex Twin fan, but I’ve followed him for a few decades and always liked the visual and audio tricks mixed into his music. I feel like he would enjoy the idea of an infinite track and hope he responds somehow.

Are there any file formats that allow generative music so I can download this and play in a non-internet connected situation?

generative.fm is a progressive web app, so it _should_ work offline with the caveat that you'll need to play a piece once online before it works offline. I'm working on getting this communicated through the site but just haven't gotten around to it yet.

Cool. Awesome work, thanks for sharing :)

In knowing the original track very well, I find this version fascinating but also frustrating to listen to. It really emphasizes for me the importance of well thought out note placement and timing and the connection throughout, especially in minimalist pieces. This might sound pompous, but it's like the original seeks to tell a story, but this version is like pulling words and phrases out of a hat; the emotional payoff doesn't exist for me. I hope that's not discouraging in any way, as I think this type of experimentation should be celebrated and the writeup is excellent. I'm only speaking of what effect this has on me tied to this particular piece of music.

I know what you mean and I don't find it discouraging at all. Your analogy is perfect. The way the original piece evolves and builds on its phrases over time is very noticeably lacking in the generative version. In my version, any emotional buildup from one phrase to the next can only happen in short spurts at best, and it happens completely by accident. Of course, if I was putting on music that I wanted to really focus on and enjoy, the original would always be my choice. For me, the appeal of an endless version was that I could turn the music into ignorable ambience for my environment. Since we're making analogies, to me it's a bit like seeing a painting you like and saying "Gosh I like that color," then painting your walls that color. It can't compare to the painting but it might remind you of it.

Came here to say this. The generative version has no story, so the emotional aspect is quite watered down, even if the general mood persists. Funnily enough, that is probably perfect for the use case of this kind of thing: background music.

Given that the way the phrases evolve, and also repeat, is what tells the story, maybe a second layer of markov chain driving the phrase choices would help?
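One way that two-layer idea might look as a sketch (the phrase labels, variants, and weights are all invented; in a real system the variants would be note sequences rather than strings):

```python
import random

# Top level: a chain over phrase labels, so the large-scale structure
# (returns, build-ups) is itself Markovian
phrase_chain = {
    "A": {"A": 0.5, "B": 0.3, "C": 0.2},
    "B": {"A": 0.4, "C": 0.6},
    "C": {"A": 0.7, "B": 0.3},
}

# Bottom level: each phrase label owns a pool of concrete variants
phrase_variants = {
    "A": ["A1", "A2"],
    "B": ["B1"],
    "C": ["C1", "C2", "C3"],
}

def generate(n_phrases, seed=None):
    """Pick a variant of the current phrase, then step the top-level chain."""
    rng = random.Random(seed)
    label, out = "A", []
    for _ in range(n_phrases):
        out.append(rng.choice(phrase_variants[label]))
        label = rng.choices(list(phrase_chain[label]),
                            weights=list(phrase_chain[label].values()))[0]
    return out

print(generate(8, seed=1))
```

Because the top level controls which phrase comes next, repeats and returns happen at the phrase scale rather than note by note, which is closer to how the original track builds.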

The story can only come from understanding the piece. The understanding requires a deeper knowledge. Interesting that it lacks 'emotional intelligence' - is that something which can also be learnt?

Doom from 2016 had this really cool feature in it where the music was procedurally adaptive based on the context of the game. When you were getting into a battle the music would mutate and become a lot more aggressive. Small fights would be different from big hairy fights. Ending a fight on low health would be different from leaving it unscathed.

For a long time I've thought it would be great to have something with a similar effect based on text input speed.

Situations like getting into a massive battle, or leaving a battle with low health, would have the music mutate to match.
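A sliding-window version of the text-input-speed idea might look something like this sketch (the window size, thresholds, and layer names are all arbitrary):

```python
import time
from collections import deque

class TypingIntensity:
    """Map the recent keystroke rate to a music 'layer'."""

    def __init__(self, window=5.0):
        self.window = window          # seconds of history to consider
        self.stamps = deque()

    def keypress(self, now=None):
        self.stamps.append(time.monotonic() if now is None else now)

    def layer(self, now=None):
        now = time.monotonic() if now is None else now
        # drop keystrokes that have fallen out of the window
        while self.stamps and now - self.stamps[0] > self.window:
            self.stamps.popleft()
        rate = len(self.stamps) / self.window  # keys per second
        if rate > 6:
            return "battle"
        if rate > 2:
            return "exploring"
        return "ambient"

# simulate a burst of fast typing: 40 keys over 4 seconds
ti = TypingIntensity()
for t in range(40):
    ti.keypress(now=t * 0.1)
print(ti.layer(now=4.0))  # "battle"
```

In an editor plugin you'd call keypress() from the key handler and poll layer() a few times a second to crossfade between stems.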

The DOOM: Behind the Music (https://www.youtube.com/watch?v=U4FNBMZsqrY) talk is amazing if you like DOOM 2016's music.

I believe the first mainstream game with adaptive music like that was Half Life 2 in 2004.

Mario 64 did it in 1996. For example that early level with the shipwreck, the music morphs as you go into the water and dive around the wreck. If you go out of the water again the music morphs back to the original form.

I'm not sure that that counts. My recollection of that game was that it's a binary choice of being underwater/at the surface, whereas the music in HL2 in a given spot would change depending on what the enemies were doing, how fast you were going, whether or not you were driving, if you found ammo for the rocket launcher, etc.

Lots of N64 games did this, especially the Rare ones — Banjo-Kazooie had a fair bit of music dynamism, Conker's had even more. It's pretty easy to do, and game devs figured it out pretty early on.

I assume what Half-Life 2 did was more advanced, but there was iMUSE way before that:


Seems to go more in the direction of what Autechre is doing nowadays. I'd recommend having a listen to the NTS Sessions if you feel adventurous...

I'm glad you mentioned them. Autechre produced my favourite hacking background music. I agree, people interested in generative music should definitely give NTS Sessions a listen. But for newcomers I'd suggest starting with their early IDM albums (Incunabula, Amber), as NTS Sessions may sound too "glitchy".

I think nobody else goes where they are going right now. Very forward looking and absolutely amazing listening when you "get it". Elseq, NTS Sessions and now these 40+ hours of live archives released. There is an algorithm that just does things to sound. Sometimes it's just noise that turns into the most beautiful thing in the world.

NTS has become my favorite radio. A great place to discover original music.

Autechre are my Phish.

Their live shows are these kaleidoscopic tours through the sounds (but not songs) of their various albums, recognizable only in passing, in fragments. Their albums are each distinct, but immediately recognizable. Most people can't stomach their music, but those who can swear by it. They aren't afraid to stretch out and take an hour+ for a single song.

"Autechre are my Phish."

Haha, so accurate. I don't usually play Autechre when my roommates are home.

Generative music comes full circle. This side of Aphex Twin is more or less directly inspired by Brian Eno's Music for Airports. Which itself comes out of Eno's generative music experiments.

Are you sure it was so inspired? Early IDM artists have sometimes admitted to never hearing their forebears (Stockhausen, Steve Reich, etc.); they just arrived at similar concerns independently by noodling with electronics.

Aphex Twin actually listened to Stockhausen (and vice versa): http://www.synthtopia.com/content/2010/10/15/karlheinz-stock...

and they probably met https://www.reddit.com/r/aphextwin/comments/8vyrbf/aphex_twi...

Richard D. James only heard Stockhausen for the first time after the press had been claiming his early works were influenced by Stockhausen. It turns out they weren't.

I don't think it's possible to make electronic music and not be influenced by Stockhausen. If not directly, then indirectly through diffusion.

By the 1980s most Stockhausen recordings were out of print (Stockhausen had bought the rights back from Deutsche Grammophon), and he was no longer traveling widely to promote his works, having become rather reclusive as he focused on writing the LICHT operas. So there was limited opportunity to hear his work, and his influence on electronic musicians of that decade is overstated. The advent of drum machines, and then of PCs that, as people discovered, could be modified to produce unusual sounds, is really what sparked electronic experimentation for most producers of that generation.

And you could argue Eno's proceduralism was influenced by Cage, and Cage's by Nancarrow... and on and on the family graph of influence goes. Not sure where the full circle is, but there's plenty of thread to tug on here.

Although I like the result, I find the track less soothing without the birds in the background. There's probably a way to generate those as well though, I suppose.

Agreed. I considered adding birds to it but I've found if I play this track in the morning and open my window so I can hear actual birds it's a nice experience.

I'm listening at 7AM and can confirm it mixes well with the real-life birds outside.

Sadly these office windows that surround me don't open up, and so I'm left to listen to your awesome work paired with the hum of AC blowers and fluorescent light ballast.

Ha, talk about analog!

Went on youtube and found a "10 hour" bird and forest noises video, works pretty well

Reminds me of the Infinite Jukebox: http://infinitejukebox.playlistmachinery.com/

This is very cool. My personal favorite Aphex track is Donkey Rhubarb if you're feeling ambitious.

Oh wow... an endless generation of D.R. would send you straight to the asylum! As much as I love the bonkers-ness of Donkey Rhubarb, its existing length is quite enough for one sitting!

<EVIL> Come to daddy on the other hand... </EVIL>

One day, someone will feed the Come to Daddy video into a machine learning cluster somewhere and turn it into a 12 hour video that could give Charlie Brooker nightmares.

Windowlicker would be similarly nightmarish


You are an evil genius

There are a number of languages / environments to help with the creation of generative music under the umbrella term 'live coding' - the community also tries to keep the performance aspects of music intact.

Lots of great starting points at: https://github.com/toplap/awesome-livecoding

As if aisatsana were not beautiful enough, it was first performed at London's Barbican Centre in 2012, on a suspended grand piano, swinging in the air!


I really love Aphex's piano tracks. So intimate that you can hear the pedal shifting in the background.

Pretty sure most of those tracks are not actually playable by a single person and that they're all programmed.

Aphex Twin uses a Disklavier for these piano pieces. https://en.wikipedia.org/wiki/Disklavier

Also do some research into "prepared piano" to understand how some of the timbres are achieved. Pretty sure there is some "preparation" done to the hammers/strings on the Disklavier in these songs.

Many of the pieces are playable, like Avril 14th. There are a number of people [0] playing it on YouTube and I've been able to knock out a rendition as well.

[0]: My favorite - https://www.youtube.com/watch?v=97FBWB4vv3s

That's pretty good but I don't think the higher octave stuff near the end is spot on. This is not playable: https://www.youtube.com/watch?v=3uhTwxqE4Co

I've used my nose to hit individual notes in the middle of the keyboard while using my hands at the extreme ends.

I didn't listen carefully to the whole track, but I think you could handle what I did hear that way. Not certain that would work here, just thought I'd point out that there are options beyond your fingers for pushing keys.

J. S. Bach is reputed to have held a stick in his mouth to have an additional note on tap - much more practical and playable than my stupid nose trick. If you used a forked stick or a crafted tool you could easily get more than one note, for that matter.

http://www.storycompositions.com/2008/07/rare-stories-about-... (see section "His Music Is Terrible")

(I've also used my face to move modwheels while holding dense chords with both hands, but that's not really relevant to piano technique.)

Agreed, by playable I was meaning by two hands.

Ever since Drukqs, Aphex Twin's albums have contained both fast-paced techno and quiet little solo piano pieces that sound well within the range of what one human can play.

How do you feel about Alberto Balsam? :)

Alberto has a much more 'traditional' song structure, with a specific melody/harmonies, and well defined sections. I feel like this Markov Chain process is best suited for more loosely structured ambient tracks. That said, I'd be curious to see the results!
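(Not the article's actual code, just a hypothetical sketch of the general technique: build a first-order Markov chain from an observed sequence of phrases, then random-walk it forever. The phrase labels and transitions here are made up.)

```javascript
// Toy first-order Markov chain over phrase labels.
// Records every observed transition, then walks them at random.
function buildTransitions(sequence) {
  const table = {};
  for (let i = 0; i < sequence.length - 1; i++) {
    const [from, to] = [sequence[i], sequence[i + 1]];
    (table[from] = table[from] || []).push(to);
  }
  return table;
}

function* generate(table, start) {
  let current = start;
  while (table[current] && table[current].length) {
    yield current;
    const choices = table[current];
    current = choices[Math.floor(Math.random() * choices.length)];
  }
  yield current; // dead end: emit final state and stop
}

// Example: phrases A, B, C as they might appear in a source track.
const table = buildTransitions(['A', 'B', 'A', 'C', 'A', 'B']);
const out = [];
for (const phrase of generate(table, 'A')) {
  out.push(phrase);
  if (out.length >= 8) break; // it would otherwise run forever
}
console.log(out.join(' '));
```

Since duplicate transitions are kept in the table, more frequent transitions are proportionally more likely to be chosen, which is the whole trick.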

I'd actually pay for it.

Yes. My favorite Aphex Twin track.

Relatedly, would strongly recommend the Booka Shade DJ-Kicks album - Alberto Balsam is one of the tracks they selected https://www.youtube.com/watch?v=onLPjryBtns

Generative.fm was shared here last week, spawning a number of interesting discussions. If you enjoy the comments here, I suggest checking out that thread.


I find that perfectly quantized tracks sound artificial over time. Not sure how close “aisatsana” sticks to perfectly on-(half)-beat, but adding slightly random quantization offsets to each generated note could be interesting, to lend the results a more "natural" sound.
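A minimal sketch of that idea (the jitter amount is an arbitrary choice, and I'm using 102 bpm since the track is titled "aisatsana [102]"): nudge each quantized note start by a few milliseconds.

```javascript
// Nudge each quantized note start time (seconds) by a small random
// offset to mimic a human player. maxJitter is arbitrary here.
function humanize(noteTimes, maxJitter = 0.02) {
  return noteTimes.map(
    (t) => Math.max(0, t + (Math.random() * 2 - 1) * maxJitter)
  );
}

// Perfectly quantized notes on every half-beat at 102 bpm.
const halfBeat = 60 / 102 / 2; // ~0.294 s
const quantized = Array.from({ length: 8 }, (_, i) => i * halfBeat);
console.log(humanize(quantized));
```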

Nice! Simple, but works out super nicely :) Even the occasional odd note doesn't seem too out of place, since the phrases are short and "self-contained".

I also enjoyed the rest of the pieces on generative.fm, they all have a specific "character" which is quite rare. Nice work!

It's a beautiful piece and you've done it justice OP. Just wanted to say that.

Just pop it into PaulStretch and convert it into an hour long masterpiece. https://www.youtube.com/watch?v=1saQ7KLbDUY

PaulStretch is amazing!

I recommend it to everyone trying to meddle with music, especially ambient/drone.


This seems very similar to what the Infinite Jukebox does. Granted, it uses the straight audio, so it's more well suited to pop music. But here's the link for this track: http://infinitejukebox.playlistmachinery.com/?trid=TRIMVDB14...

Not generative, but of interest if you like that: http://musicforprogramming.net/

This is excellent. Thank you so much for sharing (and it's certainly a track that I've also wished was longer!) I'd be interested to see if this approach can be implemented within a DAW? This would allow the notes to be played and then treated with FX, EQ and mastering (or maybe just some sound design to get it sounding even closer to the original)? At a push, one could run the output of the browser through the DAW I suppose :)

Similar to what another commenter mentioned, I've done some experiments creating a virtual MIDI port from my code and pushing all the notes to it. A DAW can then read from this like any other MIDI device, as if it's just a MIDI keyboard that someone is playing. Another approach is to generate a MIDI file of some specified length and load that in the DAW, which is nice as it doesn't need to be generated and recorded in real time.

Sure - even the built in midi effects are enough to do a lot of generative stuff in Ableton, and when you get into Max the sky is pretty much the limit.

But, you could pretty much feed any source of generated midi into a DAW in real time in multiple channels and then have effects on different channels, etc.

True! Also, if you are not able to afford Max/MSP, you can do basically the same stuff (although with a clunkier UI) with Puredata, which is a live coding 'graphical language' setup just like Max . In fact it is (or was idk) developed by the same author, Miller Puckette.

On Windows, you can use a package like loopMidi to create a virtual midi port which you can use to output the live generated midi data to any daw.

I haven't read through the article but people have been doing generative tracks in Reaktor for years

Reminds me of this Bandcamp user who makes some really cool sci-fi ambient loops; this one is a Blade Runner ambient loop: https://cheesynirvosa.bandcamp.com/track/blade-runner-ambien...

I think something like this would be awesome in generative form.

As an aside, if you like the track discussed in the article, check out Brian Eno's ambient work [1].

This track is a very obvious homage to Eno (and that's great).

Coincidentally, Eno and Richard James are both into generative music and I have little doubt they'll be taking a peek at this article.

1. https://www.youtube.com/watch?v=0TSJbT_NWUY

I tend to listen to ambient music. There's a radio show on CBC Radio 2 -- After Dark -- that plays tunes with not too many lyrics.

CBC Radio 2 used to (or might still?) have a similar program called The Signal that was really good. And before it, there was a program called Brave New Waves that was probably my favourite late night show in existence. Really sad it doesn't exist anymore.

After Dark is the continuation of The Signal -- same music but different host.

I have an archive of the last 3 years of The Signal in mp3, it's about 90GB.

Send me a message at gimmespam at flamy.ca and I'll send you a link.

One tactic to roll your own is time-stretching. A useful and fun algo (since tamed in 1999) is the phase vocoder. Especially nice for instruments with rich spectrums.

Example work (1986) Wishart's Vox 5. https://www.youtube.com/watch?v=y23kobWHs8M
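To make the idea concrete, here's a toy time-domain stretch using only the windowed overlap-add skeleton a phase vocoder is built on. A real phase vocoder would additionally FFT each frame and correct the phases before resynthesis (that's the part "tamed" over the years); this sketch skips that, so it will smear transients:

```javascript
// Toy overlap-add time stretch: the analysis hop moves through the
// input slower than the synthesis hop writes the output, so the
// output is `factor` times longer. Hann-windowed, normalized.
function stretch(samples, factor, frame = 256, synthHop = 64) {
  const analysisHop = synthHop / factor;
  const outLen = Math.ceil(samples.length * factor);
  const out = new Float64Array(outLen);
  const norm = new Float64Array(outLen);
  for (let outPos = 0; outPos + frame <= outLen; outPos += synthHop) {
    const inPos = Math.floor((outPos / synthHop) * analysisHop);
    if (inPos + frame > samples.length) break;
    for (let i = 0; i < frame; i++) {
      const w = 0.5 - 0.5 * Math.cos((2 * Math.PI * i) / frame); // Hann
      out[outPos + i] += w * samples[inPos + i];
      norm[outPos + i] += w;
    }
  }
  // Divide out the summed window weights.
  for (let i = 0; i < outLen; i++) if (norm[i] > 0) out[i] /= norm[i];
  return out;
}
```

Without the phase correction, overlapping frames taken from different input positions interfere; fixing that interference in the frequency domain is exactly what the phase vocoder contributes.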

Great article. Is an extended edition of aisatsana available for download anywhere, or is it only available through generative.fm?

How would that work? The idea is it never ends, so how do you download it? It creates a new song each time.

Well, I would be willing to settle for a mix of 1-2 hours. :)

You can use audacity to record audio coming out of your computer - just change "MME" to "Windows WASAPI" next to the input/output device settings.

Then, play audio from generative.fm and record it for 1-2 hours. You can export to MP3 or WAV when done.

I did something similar (but with OBS) in this video where I remixed music from the game Mirror's Edge Catalyst using Sonic Pi - https://www.youtube.com/watch?v=gQ8dD5Bz3_E

Personally, I like the generative aspect.

This piece reminds me of Harold Budd's album "The Pearl" (1984)! Which is 42 minutes long - so another way to approach the same problem :) https://open.spotify.com/album/5SSf6lNbSoaAUx6PxQVjlP

For the last 11 years I’ve been listening to autechre exclusively. Maybe that’s weird, but it’s all the sounds I need :)

How did this come about? At some point you realized you were listening to mostly Autechre, and made a conscious decision to cut everything else out? (I’m listening to LP5 now, somehow it has always been my favorite)

yes, exactly!

As an Aphex Twin fan and someone very interested in procedurally generated music, this is absolutely brilliant.

I always think a bit of drum and bass will start playing at any moment listening to this[0]. Aphex Twin's music is so random at times

[0] https://www.youtube.com/watch?v=3_MRe3JwFc8

What is a good way to go about working with a new JS library? I only have used Python, Java, and C#. All the tutorials I find on tone.js just include the js code but not information on how to link the library.

I would like to play around, but getting the dev environment set up has stumped me.

there are many code playground sites[0] that could make the process less painful for you. essentially you will need to reference the library. you can do it locally, or reference the publicly available version (most libraries have a CDN hosted version).

you could also google for the playground site + library, because there may already be a playground setup (a project with a reference to the library already set) for that library somewhere.

[0] https://www.sitepoint.com/7-code-playgrounds/

How do I tell what language to use with this package?

From what I can tell it uses react. I have tried node, angular, and react but this installation page confounds me [0]. As far as I understand, I have been using either npm init, ng init, or create-react-app to initialize the directory for an example project. Then I do npm install tone inside of the directory I created.

I have found this [1] playground for tone but it does not elucidate how the library should be or is referenced.

I'd like to work with generative music but the amount I must know and choose between in a js project always seems to freeze me at the project init phase.

In the process of editing this comment I have finally gotten tone.js to work. Here are the steps I followed:

  npm install create-react-app
  create-react-app tone-test
  cd tone-test
  npm install tone

Then add "import Tone from '../node_modules/tone'" to the top of App.js (this step is what I was messing up previously, I believe). Then I just throw Tone commands at the bottom of App.js and they play on page load, which is exactly what I wanted for now.

[0] https://github.com/Tonejs/Tone.js/wiki/Installation [1] https://codepen.io/loderunnr/pen/AXoAko

it's gonna be tricky to learn this way.. i'd recommend getting the basics down first.

i can recommend Brian Holt's intro to web development on frontendmasters.com. he's great.. i learned React from his courses :)

Sonic Pi is an excellent programming tool that lets you do this sort of thing with relative ease.

That's so awesome. It'd be nice to have the bird noises in the background as well, though I guess this would be tricky if it's using MIDI to play the music.

I guess I could find some ambient bird noise track and play it alongside...

MIDI only controls sounds, it does not generate them; it's widely used to trigger samples, so slip some samples of bird noises into your mix and trigger away.

It's not using midi, and yes, bird noises would be a perfect addition and totally easy to do with the Web Audio API. Great idea.

This is seriously impressive and very pleasant to listen to.

I'd love to see what Richard has to say about this.

> Aphex Twin (aka Richard James) is known for creating original, complex sounds whenever he can, but his next creation might just take the cake. He tells Groove that he hired a programmer to develop music software based on mutation. Once you give the app an audio sample, it automatically generates six variants on that and asks you to pick your favorite before going on to create more variations -- think of it as natural selection for sweet beats. The software still "needs to be tweeked," and there's no mention of a public launch, but the early output reportedly sounds "totally awesome." Don't be shocked if one of James' post-Syro albums uses this software to create some truly one-of-a-kind tunes.

Source: https://www.engadget.com/2014/12/29/aphex-twin-mutation-musi...

I can't speak for him, but personally if I was him I'd say 'finally! I've only had these algorithms printed on my records for ~10-20 years'

But, realistically, that'd be like in the Hitch Hiker's Guide to the Galaxy when they said 'but the notice was in Alpha Centauri the whole time! How did you not know?'

Great writeup, generative.fm is awesome, I'll definitely be using that when I need to focus and nothing in my music library appeals to me.

Huh. Was bracing for something like "come to daddy" when I hit play. Didn't realize Aphex Twin did anything else.

You may have heard this behind Kanye, but it's another beautiful piano piece from Richard D. James. Avril 14th from Drukqs: https://www.youtube.com/watch?v=MBFXJw7n-fU

He's basically the father of modern claustrophobic ambient with his Selected Ambient Works pt2. A very special thing to fall asleep to and wake up while it's still playing.

Diverged from Brian Eno's more warm and happy tunes to something darker.

Agreed on the ambient classification but I always think of him as much more than that as far as influence goes. He's basically had an influence on a huge portion of electronic and non-electronic music over the past few decades.

Yeah, it's great stuff so far. I wish I'd known that years ago. Only thing that ever got air play was the other side.

And Come to Daddy was a joke against the mainstream of the time, which was The Prodigy in those days. Although most of his stuff has always been some kind of joke, even though it's usually brilliant stuff.

Aphex Twin wrote one of my favorite songs of all time. Quiet. Melancholy. Absolutely gorgeous. Not the least bit noisy.


Mentally I have a third group for his music, in addition to the two you mentioned, which is full of fast paced melodic music like 'xtal' that doesn't have any of the harsh beats of his music like Come to Daddy.

Pairs well with https://www.birdsong.fm/ :D

I always liked Burial for programming.

Something about the sound of a neglected piano. Detuned, buzzing; it's played as is.

Actually, Aphex Twin and John Cage are both famous for using a technique called 'prepared piano', in which the artist intentionally modifies (or even damages) a piano, e.g. by shoving objects into the strings within the body, to create a deliberately detuned or broken sound.


Terry Adams (of NRBQ fame) also has a fascinating prepared piano album, if you want to explore other works. http://www.nrbq.com/store/cd-Andromeda.html

Aphex Twin as a top thread... this is why I love hacker news.

Is there a plan for an iOS app anytime soon?

Muzak missed a trick here

This is great - thank you

generative.fm is my goto when I need to focus.

https://brain.fm is cool too

Very cool, thanks.

Has this guy never heard of a bar or measure? Reading through the first section where he describes the song as sections of 16 beats makes me cringe, and this is coming from someone admittedly horrible with music theory.

Edit: I was turned off by his constant usage of the word “beats”, not phrase.

Thanks for the feedback. I figured people who understood music would feel this way, but I deliberately chose to omit as many musical words as possible so that readers who weren't familiar with them wouldn't get lost. I would have felt obligated to define "bar" and "measure" if I used them, and ultimately I decided they weren't necessary. Sure, I could have defined what a measure was, that there are four beats in a measure (in this particular case), that each phrase is four measures instead of 16 beats, and what time signatures and quarter notes and eighth notes are. However, I believe all of this would have bloated the article and alienated readers who aren't as familiar with music theory as you and I are. As is, the article is still perfectly readable to someone who does have an understanding of music theory, and all we have to put up with is some non-traditional terms like "half-beat."

Phrase is actually the appropriate term. A bar would be 4 beats (for this song, which is in 4/4). The main phrase of the melody is 4 bars long.


Good point. I miswrote the original comment, anyways, I meant beats.

I thought a bar is 4 beats, a measure is 4 bars (in 4/4)? Going to have to look this up but is measure == phrase == section? Or I guess, a section can be arbitrary number of bars depending on song structure.

There's a fair amount of looseness in how these terms are used, in practice, depending on the context.

4/4 means 4 beats per bar, and we'll use a quarter note to represent a beat when writing notation. Another example would be 7/8, which is 7 beats per bar, and we'll use an eighth note for the beat when writing notation.

Looking at music in terms of bars or measures only really matters when creating traditional western sheet music notation. If we're writing computer code, or playing by ear, or looking at other kinds of music notation, those terms like "bar" and "measure" become less meaningful, and other terms become more useful or appropriate for describing the structure of the music.
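For example, mapping the article's flat beat indices back onto bars is just integer arithmetic (assuming 4/4, as in this song):

```javascript
// Convert a zero-based beat index into a (bar, beat) position.
function beatToBar(beatIndex, beatsPerBar = 4) {
  return {
    bar: Math.floor(beatIndex / beatsPerBar),
    beat: beatIndex % beatsPerBar,
  };
}

// The article's 16-beat phrases are four 4/4 bars: beats 0..15 span bars 0..3.
console.log(beatToBar(15)); // → { bar: 3, beat: 3 }
```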

bar == measure

Actually, a phrase is a common term in music theory.

Guidelines | FAQ | Lists | API | Security | Legal | Apply to YC | Contact