And if you try Businessfunk and decide you have the stomach for it, I recommend following up with the Komputer Cast mixes by Com Truise.
Notably, Businessfunk mixes are made of ‘library music,’ i.e. stock musical clipart. Gotta say, I've lately heard a bunch of rather good music from libraries, e.g. the entire soundtrack of ‘IASIP’ (though that one is from “production music” libraries, i.e. more specialized for film, television and the like).
These three bands work well for me when coding (or just "space ambient" on youtube):
- AES Dana: https://ultimae.bandcamp.com/album/perimeters
- Carbon Based Life Forms: https://carbonbasedlifeforms.bandcamp.com/album/hydroponic-g...
- Solar Fields: https://solarfields.bandcamp.com/album/movements-remastered
Many public radio stations carry it on Sunday nights, even the “traditional” format ones that are mostly classical music and news. It was my first introduction to ambient music, and the host’s voice gives me a mix of nostalgia and panic (as it meant I needed to be getting on with my homework).
For me I'd say Tycho is a bit more my bag when it comes to a tippy-tappy-loads-of-code session.
I was already intimately familiar with many of the artists featured (Aphex Twin, Oneohtrix Point Never, Tim Hecker) but have definitely discovered some new ones through it too.
Eno's "Music for Airports" famously uses a system of multiple tape loops that produce sequences of different periods. As you listen, you can hear phrases that occur nearly together and then later, well-separated in time, as the periods of these loops go in and out of phase:
"One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank's studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again."
This interaction of periods will have long memory. (If the tape lengths are L and L+d, for small d, then the repeat time could easily be as long as L*(L/d), and even longer if d does not divide L evenly.) Thus, it is very different from what you get with a Markov chain.
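A quick back-of-the-envelope check of Eno's numbers bears this out. This sketch takes his "or something like that" figures at face value (the exact fractional periods are an assumption for illustration) and computes when the three loops would all line up again:

```python
from fractions import Fraction
from math import lcm

# Loop periods Eno quotes: 23 1/2 s, 25 7/8 s, 29 15/16 s.
# The last two are "or something like that", so treat these as illustrative.
periods = [Fraction(47, 2), Fraction(207, 8), Fraction(479, 16)]

# Express every period in sixteenths of a second, then the loops all
# realign after the least common multiple of those tick counts.
ticks = [int(p * 16) for p in periods]  # 376, 414, 479 sixteenths
realign = Fraction(lcm(*ticks), 16)     # seconds until full re-sync

days = float(realign) / 86400
print(days)  # roughly 27 days before all three loops line up again
```

So even with periods only seconds apart, the full cycle runs on the order of weeks, which is why the piece never sounds like it repeats.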
Q: If I could give you a black box that could do anything, what would you have it do?
A: I would love to have a box onto which I could offload choice making. A thing that makes choices about its outputs, and says to itself, This is a good output, reinforce that, or replay it, or feed it back in. I would love to have this machine stand for me. I could program this box to be my particular taste and interest in things.
Q: Why do you want to do that? You have you.
A: Yes, I have me. But I want to be able to sell systems for making my music as well as selling pieces of music. In the future, you won't buy artists' works; you'll buy software that makes original pieces of "their" works, or that recreates their way of looking at things. You could buy a Shostakovich box, or you could buy a Brahms box. You might want some Shostakovich slow-movement-like music to be generated. So then you use that box. Or you could buy a Brian Eno box. So then I would need to put in this box a device that represents my taste for choosing pieces.
Enough of these passes and detailed removal of elements you wouldn't create, and you should get closer and closer to a machine that would be your musical clone.
I'm sure a musician who was also a programmer who understood their own tastes and creation process well enough could currently create something like this for a specific genre, but I think we're very far from a one-size-fits-all generator.
I remember a few years back there was some quick guide to visual design that recommended this. To provide what appears to be randomness over repeating samples (or at least to prevent the easy pattern matching in the brain, which is distracting), take three images of different prime-number lengths, then repeat each one and put the layers next to each other.
I believe the example used was the ruffles in a stage curtain, where there were a few layers. Each layer was a repeating image of length 3, 5, or 7 (with some variation between ruffles within each image). The layers won't all stop and start at the same point (a dead giveaway of the pattern) until length 105.
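The 105 is just the least common multiple of the layer lengths; a minimal sketch, using the 3/5/7 lengths from the curtain example:

```python
from math import lcm

lengths = [3, 5, 7]  # coprime (here prime) layer lengths
period = lcm(*lengths)
print(period)  # 105

# Brute-force confirmation: the tuple of per-layer phases first
# returns to (0, 0, 0) after exactly `period` steps.
def phases(t):
    return tuple(t % n for n in lengths)

first_repeat = next(t for t in range(1, 1000) if phases(t) == phases(0))
print(first_repeat)  # 105
```

Any pairwise-coprime lengths work; primes are just the easy way to guarantee it.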
I prefer to just write a bunch of riffs and see where they grow organically, but it seems the chain would be a great tool to piece some of those ideas together or give an indicator of where they could go when a creative block is hit.
I mean, most music is terribly dull and meaningless in the grand picture of things. The selection of input would be more like the job of a DJ. I don't know how much creativity can fit into programming the chain, or which features to pick up. It would probably just be a part of a bigger workflow: starting small, creating short samples, arranging those, et cetera.
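At its simplest, "programming the chain" could look like this first-order Markov chain over note names (the riff here is made up for illustration):

```python
import random
from collections import defaultdict

def train(notes):
    """First-order Markov chain: record which note follows which."""
    chain = defaultdict(list)
    for a, b in zip(notes, notes[1:]):
        chain[a].append(b)
    return chain

def generate(chain, start, length, seed=0):
    """Walk the chain, restarting from `start` at any dead end."""
    rng = random.Random(seed)
    out = [start]
    for _ in range(length - 1):
        followers = chain.get(out[-1]) or [start]
        out.append(rng.choice(followers))
    return out

riff = ["C", "E", "G", "E", "C", "E", "A", "G", "E", "C"]
chain = train(riff)
melody = generate(chain, "C", 12)
print(melody)
```

Feeding it several riffs at once is where the DJ-like curation comes in: the chain can only recombine transitions you gave it.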
For the story it helps that Aphex Twin is rather random to begin with (e.g. having created a song by inverse Fourier synthesis over a grayscale picture, creating the sound for a desired spectrogram). The irony is appreciable, though commendable if someone likes the result.
Didn't want to explicitly promote it here in the thread, but I'm building it to serve as a general music-content engine for artists (also for my own music, let's be honest) that will let you upload composition parts and the player will put them together on the fly. It already has interactivity built in where you can serve a different version of a track based on changing inputs you connect from your Internet presence etc. Plus, you can DJ with it, it supports smoothly changing playback speeds. It needs to happen.
Some video games do this automatically with their in-game music, but not all, and many streamers prefer using other music to avoid it getting stale.
Are Markov chains not considered ML/AI? I would consider them a standard part of the field.
I feel like that album has a lot of potential for generative experiments. Admittedly, the album has an over-arching tone of eeriness throughout, which isn't something I want to listen to while I work most days. Maybe it would be inspiring to game developers working in the horror genre :)
If you guys haven't played it, seriously give it a shot. I believe there is a re-released version on XBOX and PC, though I can't speak to the quality of the ports, as I still play my Dreamcast almost daily. ;)
This page seems to say that his music wasn't used in the released version. https://www.unseen64.net/2008/04/10/k-project-rez-prototype/
Strange how memory affects us.
EDIT: And who, lucky enough to own one, doesn't remember the Dreamcast fondly? ;) Crazy Taxi, Quake III, Skies of Arcadia, Ecco, THPS2 - I'd argue the ratio of quality software to shit software was almost 1:1, compared to, for instance, the PS2, where you might get 1 great game for every 10.
I just pulled my Dreamcast out to play some of the games with my son. It's still a great system. There are some low-quality games, but it feels like you have to seek them out, as opposed to the shovelware feeling I get on the PS1/PS2. Looking at the released game list, though, I feel like some of these titles were probably good at the time but won't hold up; many of them, however, are still as good as they were. VGA output is pretty nice too.
Tetsuya Mizuguchi, the producer of Rez, also worked on Tetris Effect which has an amazing soundtrack. Tetris Effect feels like a weird cousin of Rez btw!
Oh, fuck off. That's amazing. Colour me almost wanting to grab one. Wish it had a Vive port.
I'm not a fan of Tetris. I like games that have a very distinct 'beginning and end of the level' feeling.
Eh, lots of love lower in the thread. Have this instead...
Also, there is a "missing" 19th track you can preview on his site: https://aphextwin.warp.net/release/68148-aphex-twin-selected....
Interview with David Toop, March 1994, The Face
Broaching this subject of dreams, he becomes animated and talks a long streak. "This album is really specific," he says, "because 70 percent of it is done from lucid dreaming... To have lucid dreams is to be conscious of being in a dream state, even to be capable of directing the action while still in a dream. I've been able to do it since I was little," Richard explains. "I taught myself how to do it and it's my most precious thing. Through the years, I've done everything that you can do, including talking and shagging with anyone you feel that takes your fancy. The only thing I haven't done is tried to kill myself. That's a bit shady. You probably wouldn't wake up, and you wouldn't know if it had worked, anyway. Or maybe you would.
"I often throw myself off skyscrapers or cliffs and zoom off right at the last minute. That's quite good fun. It's well realistic. Eating food is quite smart. Like tasting food. Smells as well. I make foods up and sometimes they don't taste of anything—like they taste of some weird mish-mash of other things."
"About a year and a half ago," he says, "I badly wanted to dream tracks. Like imagine I'm in the studio and write a track in my sleep, wake up and then write it in the real world with real instruments. I couldn't do it at first. The main problem was just remembering it. Melodies were easy to remember. I'd go to sleep in my studio. I'd go to sleep for ten minutes and write three tracks - only small segments, not 100 percent finished tracks. I'd wake up and I'd only been asleep for ten minutes. That's quite mental.
"I vary the way I do it, dreaming either I'm in my studio, entirely the way it is, or all kinds of variations. The hardest thing is getting the sounds the same. It's never the same. It doesn't really come close to it. When you have a nightmare or a weird dream, you wake up and tell someone about it and it sounds really shit. It's the same for sounds, roughly. When I imagine sounds, they are in dream form. As you get better at doing it, you can get closer and closer to the actual sounds. But that's only 70 percent of it."
I think it may have to do with me being ADHD, but I actually like ambient music with a little bit of a stressful edge to it. It gives me a little bit of urgency and controlled stress seems to be my best motivator.
The author has created a JS library which you can use to play .sunvox files; it's pretty nice too.
Try 'machine 005'. I've been using it for coding lately.
I still haven't encountered anything like it. I wish there was more.
- The Alarm Will Sound collective has an album called Acoustica where they made acoustic-instrument-based covers of Aphex Twin songs. https://www.alarmwillsound.com/
- The Bad Plus covered Aphex Twin's Flim on their These Are the Vistas album. https://www.youtube.com/watch?v=HeMre0Sp7o4
Both Alarm Will Sound and The Bad Plus are somewhat different, quite interesting directions to explore for other music you might enjoy programming to.
Warp Records (the record label that Aphex Twin and Boards of Canada are a part of) has built a really solid catalogue over the years.
Example from Catalyst - https://www.youtube.com/watch?v=2fb5_zVk2gY&t=1h46m48s
Also: Global Communication, Ocouer, Christopher Willits, Marconi Union, Eluvium, Ólafur Arnalds, Balmorhea.
In that order. All solid. If you want more, you should check out the Rephlex Records current and previous artists list. All really solid ambient-focused electronic music.
(via: http://www.wisp.kaen.org/ )
I’m not the biggest Aphex Twin fan, but I’ve followed him for a few decades and always liked his visual and audio tricks mixed into his music. I feel like he would enjoy the idea of an infinite track and hope he responds somehow.
Are there any file formats that allow generative music so I can download this and play in a non-internet connected situation?
Given that the way the phrases evolve, and also repeat, is what tells the story, maybe a second layer of markov chain driving the phrase choices would help?
For a long time I've thought it would be great to have something with a similar effect based on text input speed.
Situations like getting into a massive battle, or leaving a battle with low health, would have the music mutate to match.
Their live shows are these kaleidoscopic tours through the sounds (but not songs) of their various albums, recognizable only in passing, in fragments. Their albums are each distinct, but immediately recognizable. Most people can't stomach their music, but those who can swear by it. They aren't afraid to stretch out and take an hour+ for a single song.
Haha, so accurate. I don't usually play Autechre when my roommates are home.
and they probably met
Come to daddy on the other hand...
You are an evil genius
Lots of great starting points at:
Also do some research into "prepared piano" to understand how some of the timbres are achieved. Pretty sure there is some "preparation" done to the hammers/strings on the Disklavier in these songs.
My favorite: https://www.youtube.com/watch?v=97FBWB4vv3s
I didn't listen carefully to the whole track, but I think you could handle what I did hear that way. Not certain that would work here, just thought I'd point out that there are options beyond your fingers for pushing keys.
J. S. Bach is reputed to have held a stick in his mouth to have an additional note on tap - much more practical and playable than my stupid nose trick. If you used a forked stick or a crafted tool you could easily get more than one note, for that matter.
http://www.storycompositions.com/2008/07/rare-stories-about-... (see section "His Music Is Terrible")
(I've also used my face to move modwheels while holding dense chords with both hands, but that's not really relevant to piano technique.)
Relatedly, would strongly recommend the Booka Shade DJ-Kicks album - Alberto Balsam is one of the tracks they selected https://www.youtube.com/watch?v=onLPjryBtns
I also enjoyed the rest of the pieces on generative.fm, they all have a specific "character" which is quite rare. Nice work!
I recommend it to everyone trying to meddle with music, specially ambient/drone.
But, you could pretty much feed any source of generated midi into a DAW in real time in multiple channels and then have effects on different channels, etc.
On Windows, you can use a package like loopMidi to create a virtual midi port which you can use to output the live generated midi data to any daw.
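For the curious, "live generated MIDI data" is just a stream of small byte messages. This is a stdlib-only sketch of what a generator might hand to the virtual port (the transport itself would go through loopMidi plus a sender library such as mido or python-rtmidi; the pentatonic scale and random walk here are arbitrary choices for illustration):

```python
import random

NOTE_ON, NOTE_OFF = 0x90, 0x80  # MIDI channel-voice status bytes (channel 1)

def note_on(note, velocity=100, channel=0):
    """3-byte MIDI note-on message: status, note number, velocity."""
    return bytes([NOTE_ON | channel, note & 0x7F, velocity & 0x7F])

def note_off(note, channel=0):
    """3-byte MIDI note-off message (velocity 0)."""
    return bytes([NOTE_OFF | channel, note & 0x7F, 0])

# A simple random walk over a pentatonic scale as the "generator".
scale = [60, 62, 65, 67, 70]  # C, D, F, G, Bb
rng = random.Random(42)
events = []
idx = 2
for _ in range(8):
    idx = max(0, min(len(scale) - 1, idx + rng.choice([-1, 0, 1])))
    events.append(note_on(scale[idx]))
    events.append(note_off(scale[idx]))

# Each 3-byte message would be written to the virtual port; the DAW on the
# other end hears it exactly as it would a hardware keyboard.
print([m.hex() for m in events[:2]])
```

The DAW side then just treats the virtual port as another input device, so all the usual per-channel effects apply.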
I think something like this would be awesome in generative.
This track is a very obvious homage to Eno (and that's great).
Coincidentally, Eno and Richard James are both into generative music and I have little doubt they'll be taking a peek at this article.
I have an archive of the last 3 years of the signal in MP3; it's about 90GB.
Send me a message at gimmespam at flamy.ca and I'll send you a link.
Example work (1986) Wishart's Vox 5. https://www.youtube.com/watch?v=y23kobWHs8M
Then, play audio from generative.fm and record it for 1-2 hours. You can export to MP3 or WAV when done.
I did something similar (but with OBS) in this video where I remixed music from the game Mirror's Edge Catalyst using Sonic Pi - https://www.youtube.com/watch?v=gQ8dD5Bz3_E
Personally, I like the generative aspect.
I would like to play around but getting the dev environment has stumped me.
You could also Google for the playground site + the library name, because there may already be a playground setup (a project with a reference to the library already set) for that library somewhere, i.e.
From what I can tell it uses React. I have tried Node, Angular, and React, but this installation page confounds me. As far as I understand, I have been using either npm init, ng init, or create-react-app to initialize the directory for an example project. Then I do npm install tone inside the directory I created.
I have found this playground for Tone, but it does not elucidate how the library should be or is referenced.
I'd like to work with generative music but the amount I must know and choose between in a js project always seems to freeze me at the project init phase.
In the process of editing this comment I have finally gotten tone.js to work. Here are the steps I followed:
npm install create-react-app
npm install tone
add "import Tone from 'tone'" to the top of App.js (the bare 'tone' specifier resolves from node_modules, so no '../node_modules/tone' path is needed) - this step is what I was messing up previously, I believe
then I just throw Tone commands at the bottom of App.js and they play on page load, which is exactly what I wanted for now.
I can recommend frontendmasters.com's Brian Holt intro to web development. He's great; I learned React from his courses :)
I guess I could find some ambient bird noise track and play it alongside...
But, realistically, that'd be like in The Hitchhiker's Guide to the Galaxy when they said 'but the notice was in Alpha Centauri the whole time! How did you not know?'
Diverged from Brian Eno's warmer, happier tunes to something darker.
Edit: I was turned off by his constant usage of the word “beats”, not “phrase”.
I thought a bar is 4 beats, and a measure is 4 bars (in 4/4)? Going to have to look this up, but is measure == phrase == section? Or, I guess, a section can be an arbitrary number of bars depending on song structure.
4/4 means 4 beats per bar, and we'll use a quarter note to represent a beat when writing notation. Another example would be 7/8, which is 7 beats per bar, and we'll use an eighth note for the beat when writing notation.
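The arithmetic is simple enough to sketch. This assumes the tempo is given in quarter-note BPM, which is the common DAW convention (the function name is my own):

```python
def bar_seconds(beats_per_bar, beat_unit, bpm):
    """Seconds per bar for a time signature beats_per_bar/beat_unit,
    with the tempo `bpm` counted in quarter notes per minute."""
    beat_len = 4 / beat_unit          # one beat, measured in quarter notes
    return beats_per_bar * beat_len * 60 / bpm

print(bar_seconds(4, 4, 120))  # 4/4 at 120 BPM -> 2.0 seconds per bar
print(bar_seconds(7, 8, 120))  # 7/8 at 120 BPM -> 1.75 seconds per bar
```

Note the 7/8 bar is shorter than the 4/4 bar at the same tempo, since each of its seven beats is only an eighth note long.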
Looking at music in terms of bars or measures only really matters when creating traditional western sheet music notation. If we're writing computer code, or playing by ear, or looking at other kinds of music notation, those terms like "bar" and "measure" become less meaningful, and other terms become more useful or appropriate for describing the structure of the music.