There's a nice website with a clean UI called 'Music for programming', put together by a guy who curates playlists designed to accompany anyone who wants to buckle down and do deep-focus creative work for a while. I've been through each of his 53 playlists several times. 'aisatsana' is one track in one playlist, but the music overall is very much in keeping with the mood and tone of 'aisatsana'.
The curator of Music For Programming [0] (which is fantastic, yes!) also put together the very silly collection of mixes called Businessfunk [1], which are also great coding music.
And if you try Businessfunk and decide you have the stomach for it, I recommend following up with the Komputer Cast mixes by Com Truise.
Notably, Businessfunk mixes are made of ‘library music,’ i.e. stock musical clipart. Gotta say, I've lately heard a bunch of rather good music from libraries, e.g. the entire soundtrack of ‘IASIP’ (though that one is from “production music” libraries, i.e. more specialized for film, television and the like).
It’s a radio show, with an associated streaming service/record label.
Many public radio stations carry it on Sunday nights, even the “traditional” format ones that are mostly classical music and news. It was my first introduction to ambient music, and the host’s voice gives me a mix of nostalgia and panic (as it meant I needed to be getting on with my homework).
FSOL is great for washing the dishes to, but I find they can be a bit too dissonantly clicky for coding, reading, etc. But don't get me wrong, they're one of my favourite outfits and I used to play them all the time when I did radio.
For me I'd say Tycho is a bit more my bag when it comes to a tippy-tappy-loads-of-code session.
Such a well curated list. I listen to it all the time at work.
I was already intimately familiar with many of the artists featured (Aphex Twin, Oneohtrix Point Never, Tim Hecker) but have definitely discovered some new ones through it too.
To piggy-back off this, the record label Silent Season is full of great ambient/downtempo/spacey music for programming. Was listening to this today: https://shop.silentseason.com/album/the-waves
A lot of the music I love from my younger years, I can still listen to either actively for enjoyment or passively for concentration. It's entirely up to me.
For musicians and devs, it's a good hint that Markov chains are "good enough" for most interesting music applications; ML/AI doesn't typically yield better results. Happy to see this on the front page!
I don't disagree with this observation, but I wanted to mention a counterpoint.
Eno's "Music for Airports" famously uses a system of multiple tape loops that produce sequences of different periods. As you listen, you can hear phrases that occur nearly together and then later, well-separated in time, as the periods of these loops go in and out of phase:
"One of the notes repeats every 23 1/2 seconds. It is in fact a long loop running around a series of tubular aluminum chairs in Conny Plank's studio. The next lowest loop repeats every 25 7/8 seconds or something like that. The third one every 29 15/16 seconds or something. What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again."
This interaction of periods will have long memory. (If the tape lengths are L and L+d, for small d, then the repeat time could easily be as long as L*(L/d), and even longer if d does not divide L evenly.) Thus, it is very different from what you get with a Markov chain.
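A rough back-of-the-envelope check of that repeat time, using the loop lengths from the quote rounded to exact sixteenths of a second (purely illustrative arithmetic, not a reconstruction of Eno's actual system):

    // How long until two tape loops of near-incommensurable lengths realign?
    // Work in sixteenths of a second so the lengths become integers.
    const gcd = (a, b) => (b === 0 ? a : gcd(b, a % b));
    const lcm = (a, b) => (a * b) / gcd(a, b);

    const loopA = 23.5 * 16;    // 23 1/2 s  -> 376 units
    const loopB = 25.875 * 16;  // 25 7/8 s  -> 414 units

    // ~77832 units, i.e. roughly 81 minutes before the two loops line up again.
    console.log(lcm(loopA, loopB) / 16, 'seconds until realignment');

Add a third loop at 29 15/16 seconds and the combined cycle stretches out much further, which is the point of choosing near-incommensurable periods.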
There was an interview with Eno in Wired in 1995 [1] that I still think about sometimes. Amongst other things, he talks about lifting himself to the next meta-level of composition, where he doesn't write music, he writes rules for a box which makes music:
Q: If I could give you a black box that could do anything, what would you have it do?
A: I would love to have a box onto which I could offload choice making. A thing that makes choices about its outputs, and says to itself, This is a good output, reinforce that, or replay it, or feed it back in. I would love to have this machine stand for me. I could program this box to be my particular taste and interest in things.
Q: Why do you want to do that? You have you.
A: Yes, I have me. But I want to be able to sell systems for making my music as well as selling pieces of music. In the future, you won't buy artists' works; you'll buy software that makes original pieces of "their" works, or that recreates their way of looking at things. You could buy a Shostakovich box, or you could buy a Brahms box. You might want some Shostakovich slow-movement-like music to be generated. So then you use that box. Or you could buy a Brian Eno box. So then I would need to put in this box a device that represents my taste for choosing pieces.
I spent a huge amount of time playing around with a demo of SSEYO Koan Pro which Eno used to make Generative Music 1 [1] and which evolved over time into Wotja by Intermorphic [2]. It's about time I tried Wotja out!
This should be doable? You could feed your entire catalogue into the machine, and have it start generating music. For every pass, you could listen to the created piece and tell it "I would not make anything like element X, because of [specific reason], so take that out."
Enough of these passes and detailed removal of elements you wouldn't create, and you should get closer and closer to a machine that would be your musical clone.
This is missing the complexity of a lot of music. Swap "music" in your post for "books" or "software" or "human interactions" and maybe the skipped complexity becomes obvious. What if the key ingredient to someone's music is a long over-arching structure throughout the song of highs and lows, following the emotional progression of a predictable three-act story? The software would need to be capable of recognizing, synthesizing, and tuning that kind of high-level element, and surely many others like it. Even songs that are made up of a few repeating tracks overlaid probably have high-level qualities like that; the artist tweaks the setup until the end result meets the high-level goal they were aiming for.
I'm sure a musician who was also a programmer who understood their own tastes and creation process well enough could currently create something like this for a specific genre, but I think we're very far from a one-size-fits-all generator.
> What I mean is they all repeat in cycles that are called incommensurable — they are not likely to come back into sync again.
I remember a quick guide to visual design I saw a few years back that recommended this. To provide what appears to be randomness over repeating samples (or at least prevent easy pattern matching in the brain, which is distracting), take three images of different prime-number lengths, then repeat each one and place them next to each other.
I believe the example used was the ruffles in a stage curtain, where there were a few layers. Each layer was a repeating image of length 3, then length 5, then length 7 (but the image itself had some variations between ruffles within it). You won't get a point where the images all stop and start at the same place (a dead giveaway of the pattern) until length 105.
I remember that article too! I used the same technique to make a convincing, non-repeating flame-flicker algorithm for an LED: three variables, each looping with a different prime period, and the sum of the three was the LED intensity.
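A sketch of that flicker idea in JavaScript (the output function and the specific primes are made up; the original was presumably on a microcontroller):

    // Three components, each repeating with its own prime period; their sum
    // only repeats every 7 * 11 * 13 = 1001 ticks, so the flicker never
    // visibly loops on short timescales.
    const PERIODS = [7, 11, 13];
    let t = 0;

    setInterval(() => {
      const brightness =
        PERIODS.map((p, i) => Math.abs(Math.sin((2 * Math.PI * (t % p)) / p + i)))
               .reduce((sum, v) => sum + v, 0) / PERIODS.length;
      setLedIntensity(brightness); // hypothetical 0..1 output to the LED driver
      t += 1;
    }, 50);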
I actually figured there was a good chance someone here would reply with it, since I believe I originally found it here either from a submission or a comment. Thanks!
Hm, it works well for this particular song/the genre of generative music but it easily hits limits with many other types of music people might want to work with.
More advanced ML/AI techniques are needed to reason about sound in a semantic/latent space, which a stock Markov model will not do.
I feel like the Markov chain could also apply to a higher-level song formula, with probability weights for choruses, verses, bridges, breakdowns, solos, etc.
I prefer to just write a bunch of riffs and see where they grow organically, but it seems the chain would be a great tool to piece some of those ideas together or give an indicator of where they could go when a creative block is hit.
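A minimal sketch of what a section-level chain like that could look like (the section names and weights below are invented for illustration, not derived from any real corpus):

    // Weighted Markov chain over song sections instead of notes.
    const transitions = {
      intro:     { verse: 1.0 },
      verse:     { chorus: 0.6, verse: 0.2, bridge: 0.2 },
      chorus:    { verse: 0.5, bridge: 0.2, solo: 0.2, breakdown: 0.1 },
      bridge:    { chorus: 0.7, breakdown: 0.3 },
      solo:      { chorus: 0.6, breakdown: 0.4 },
      breakdown: { chorus: 1.0 },
    };

    function nextSection(current) {
      let r = Math.random();
      for (const [section, weight] of Object.entries(transitions[current])) {
        if ((r -= weight) <= 0) return section;
      }
      return current; // guard against floating-point rounding
    }

    let section = 'intro';
    const form = [section];
    for (let i = 0; i < 8; i++) form.push(section = nextSection(section));
    console.log(form.join(' -> ')); // e.g. intro -> verse -> chorus -> verse ...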
I think it would be more interesting to create a formula that sounds nice. In effect, that's what's done when creating structures "organically". Hence a lot of music is formulaic, but the problem is that the formula is predictable on purpose. It's not strictly simple, and not everyone can do it freely, but the more complexity there is, the harder it is to stay within any given constraint. Vice versa, adding a lot of complexity just to satisfy a weird constraint, instead of relaxing the constraint, can seem overloaded. The latter notion may be a meta-constraint, so you get a dynamical system. The dynamics of a Markov chain are either limited or hard to control as far as I can tell, since it's a random process. But expressiveness is king. The way I imagine controlling that would involve human interpretation in a trial-and-error loop, and therefore a lot of bad music. This, and the imitation of prior art, is a given for unplugged music too, of course, but the material to draw from is so diverse that a Markov chain is more likely to pick up all the bad parts as well.
I mean, most music is terribly dull and meaningless in the grand picture of things. The selection of input would be more like the job of a DJ. I don't know how much creativity can fit into programming the chain, or which features to pick up. It would probably just be part of a bigger workflow: starting small, creating short samples, arranging those, et cetera.
For the story it helps that Aphex Twin is rather random to begin with (e.g. having created a song by inverse Fourier synthesis over a grayscale picture, creating the sound for a desired spectrogram). The irony is appreciable, though commendable if someone likes the result.
Didn't want to explicitly promote it here in the thread, but I'm building it to serve as a general music-content engine for artists (also for my own music, let's be honest) that will let you upload composition parts and have the player put them together on the fly. It already has interactivity built in, where you can serve a different version of a track based on changing inputs you connect from your Internet presence etc. Plus, you can DJ with it; it supports smoothly changing playback speeds. It needs to happen.
One use-case that immediately jumped to mind for an API plugin is integration with popular games for game streamers to use. Something that detects that you're in the middle of a gunfight vs just chilling in the lobby, and selects/modifies the music appropriately.
Some video games do this automatically with their in-game music, but not all, and many streamers prefer using other music to avoid it getting stale.
I feel like that album has a lot of potential for generative experiments. Admittedly, the album has an over-arching tone of eeriness throughout, which isn't something I want to listen to while I work most days. Maybe it would be inspiring to game developers working in the horror genre :)
Actually, speaking of games with Aphex Twin's music, I'm not sure how many folks here recall the Dreamcast classic 'Rez' - a synesthetic, psychedelic rail shooter which actually featured a track of his, under his legal name 'Richard D. James'.
If you guys haven't played it, seriously give it a shot. I believe there is a re-released version on XBOX and PC, though I can't speak to the quality of the ports, as I still play my Dreamcast almost daily. ;)
Okay, that seems to directly contradict my actual memory of the game! I'm absolutely going to have to confirm that. I distinctly remember reading 'Richard D. James' and recognizing the name, as I was a massive fan of 'Drukqs'.
Strange how memory affects us.
EDIT: And who, lucky enough to own one, does not remember the Dreamcast fondly? ;) Crazy Taxi, Quake III, Skies of Arcadia, Ecco, THPS2 - I'd argue the ratio of quality software to shit software was almost 1:1, compared to, for instance, the PS2, where you might get 1 great game for every 10.
I recall there's a Dreamcast disc image floating around somewhere with a different music selection. Depending on where you got your copy of Rez, who knows what you've got. :)
I just pulled my Dreamcast out to play some of the games with my son. It's still a great system. There are some low-quality games, but it definitely feels like you have to seek them out, as opposed to the shovelware feeling I get on the PS1/PS2. Looking at the released game list, I feel like some of these titles were probably good at the time but won't hold up; many of them, though, are still as good as they were. VGA output is pretty nice too.
All the ports have been fantastic. The PS4 version supports PSVR, and the experience of Rez in VR is quite magical tbh.
Tetsuya Mizuguchi, the producer of Rez, also worked on Tetris Effect which has an amazing soundtrack. Tetris Effect feels like a weird cousin of Rez btw!
Rez Infinite is the only game that I strongly prefer to play on a Vive over a Rift. The Rift is more comfortable, but the Vive has a higher peak brightness. If you crank the settings in the game, the difference between headsets is significant.
PS4 VR port [1]. I say do it. I say it's the only thing worth having in VR. I say a bunch of stuff, much of it more nonsensical than this. You'll know if you need PS4 VR now or if you don't. There is no trance vibrator or Game Girl Advance review.
One thing I like to do is combine the tracks from each disc, so I listen in an order like track 1, 13, 2, 14, etc. Most of the beginning tracks are less creepy.
As mentioned in that article, the eeriness comes from him creating the album through lucid dreaming, allegedly. Here's the original interview about that; it's pretty fascinating:
Interview with David Toop, March 1994, The Face
Broaching this subject of dreams, he becomes animated and talks a long streak. "This album is really specific," he says, "because 70 percent of it is done from lucid dreaming... To have lucid dreams is to be conscious of being in a dream state, even to be capable of directing the action while still in a dream. I've been able to do it since I was little," Richard explains. "I taught myself how to do it and it's my most precious thing. Through the years, I've done everything that you can do, including talking and shagging with anyone you feel that takes your fancy. The only thing I haven't done is tried to kill myself. That's a bit shady. You probably wouldn't wake up, and you wouldn't know if it had worked, anyway. Or maybe you would.
"I often throw myself off skyscrapers or cliffs and zoom off right at the last minute That's quite good fun. It's well realistic. Eating food is quite smart. Like tasting food. Smells as well. I make foods up and sometimes they don't taste of anything—like they taste of some weird mish-mash of other things."
...
"About a year and a half ago," he says, "I badly wanted to dream tracks. Like imagine I'm in the studio and write a track in my sleep, wake up and then write it in the real world with real instruments. I couldn't do it at first. The main problem was just remembering it. Melodies were easy to remember. I'd go to sleep in my studio. I'd go to sleep for ten minutes and write three tracks - only small segments, not l00 percent finished tracks. I'd wake up and I'd only been asleep for ten minutes. That's quite mental.
"I vary the way I do it, dreaming either I'm in my studio, entirely the way it is, or all kinds of variations. The hardest thing is getting the sounds the same. It's never the same. It doesn't really come close to it. When you have a nightmare or a weird dream, you wake up and tell someone about it and it sounds really shit. It's the same for sounds, roughly. When I imagine sounds, they are in dream form. As you get better at doing it, you can get closer and closer to the actual sounds. But that's only 70 percent of it."
I vaguely knew that Richard is said to be a, ahem, trickster in interviews, but now I have a piece of evidence: hitting REM sleep in ten minutes? Yeah, right.
> Admittedly, the album has an over-arching tone of eeriness throughout, which isn't something I want to listen to while I work most days.
I think it may have to do with my ADHD, but I actually like ambient music with a little bit of a stressful edge to it. It gives me a little bit of urgency, and controlled stress seems to be my best motivator.
Green Calx, while an awesome track, always breaks my concentration. I have made many playlists named SAW1-GC on different platforms and have gone so far as considering using a utility knife to cut a custom diagonal skip groove through it on one of my three vinyl copies of SAW85-92.
One path to explore is to start with interesting covers of Aphex Twin works:
- The Alarm Will Sound collective has an album called Acoustica where they made acoustic-instrument-based covers of Aphex Twin songs. https://www.alarmwillsound.com/
Gerald Donald's work (aliases Dopplereffekt, Arpanet, Japanese Telecom; he was also part of Drexciya) has worked well for me as programming music for years. Especially the album "Inertial Frame" from Arpanet. Calabi Yau Space from Dopplereffekt works as well (which was released on Aphex Twin's own label Rephlex).
I agree now with BOC purists that it's not their best record, but "Dayvan Cowboy" came at such a tectonic moment of my life I wouldn't decisively argue it didn't start the earthquake.
The Mirror's Edge soundtrack got me into Solar Fields, and it's remained in my programming rotation since it came out, along with that of Mirror's Edge: Catalyst.
You should check out:
- Autechre
- Squarepusher
- Bonobo
- Wisp
- Brian Eno
In that order. All solid. If you want more, you should check out the Rephlex Records current and previous artists list. All really solid ambient-focused electronic music.
Bibio's Phantom Brickworks does remind me a bit of Stone in Focus. I think Capel Celyn is an especially beautiful piece, even more so if accompanied by the music clip.
Thanks for writing this and spending the time to break down and explain how you did it.
I'm not the biggest Aphex Twin fan, but I've followed him for a few decades and always liked the visual and audio tricks mixed into his music. I feel like he would enjoy the idea of an infinite track and hope he responds somehow.
Are there any file formats that allow generative music so I can download this and play in a non-internet connected situation?
generative.fm is a progressive web app, so it _should_ work offline with the caveat that you'll need to play a piece once online before it works offline. I'm working on getting this communicated through the site but just haven't gotten around to it yet.
In knowing the original track very well, I find this version fascinating but also frustrating to listen to. It really emphasizes for me the importance of well thought out note placement and timing and the connection throughout, especially in minimalist pieces. This might sound pompous, but it's like the original seeks to tell a story, but this version is like pulling words and phrases out of a hat; the emotional payoff doesn't exist for me. I hope that's not discouraging in any way, as I think this type of experimentation should be celebrated and the writeup is excellent. I'm only speaking of what effect this has on me tied to this particular piece of music.
I know what you mean and I don't find it discouraging at all. Your analogy is perfect. The way the original piece evolves and builds on its phrases over time is very noticeably lacking in the generative version. In my version, any emotional buildup from one phrase to the next can only happen in short spurts at best, and it happens completely by accident. Of course, if I was putting on music that I wanted to really focus on and enjoy, the original would always be my choice. For me, the appeal of an endless version was that I could turn the music into ignorable ambience for my environment. Since we're making analogies, to me it's a bit like seeing a painting you like and saying "Gosh I like that color," then painting your walls that color. It can't compare to the painting but it might remind you of it.
Came here to say this. The generative version has no story, so the emotional aspect is quite watered down, even if the general mood persists. Funnily enough, that is probably perfect for the use case of this kind of thing: background music.
Given that the way the phrases evolve, and also repeat, is what tells the story, maybe a second layer of Markov chain driving the phrase choices would help?
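One way to read that, sketched very loosely: bias an outer chain toward repeating the current phrase a few times before moving on, so the sense of development and return isn't purely accidental. The phrase labels, weights, and renderPhrase function here are placeholders, not taken from the article's model:

    // Outer layer: which phrase plays next, with a strong bias toward repetition.
    const phraseTransitions = {
      A: { A: 0.6, B: 0.3, C: 0.1 },
      B: { B: 0.5, A: 0.3, C: 0.2 },
      C: { C: 0.5, A: 0.5 },
    };

    const pick = (weights) => {
      let r = Math.random();
      for (const [k, w] of Object.entries(weights)) if ((r -= w) <= 0) return k;
      return Object.keys(weights)[0];
    };

    let phrase = 'A';
    for (let i = 0; i < 12; i++) {
      renderPhrase(phrase);                 // hypothetical inner, note-level generator
      phrase = pick(phraseTransitions[phrase]);
    }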
The story can only come from understanding the piece. The understanding requires a deeper knowledge. Interesting that it lacks 'emotional intelligence' - is that something which can also be learnt?
Doom from 2016 had this really cool feature in it where the music was procedurally adaptive based on the context of the game. When you were getting into a battle the music would mutate and become a lot more aggressive. Small fights would be different from big hairy fights. Ending a fight on low health would be different from leaving it unscathed.
For a long time I've thought it would be great to have something with a similar effect based on text input speed.
Situations like getting into a massive battle, or leaving a battle with low health, would have the music mutate to match.
Mario 64 did it in 1996. For example, in that early level with the shipwreck, the music morphs as you go into the water and dive around the wreck. If you go back out of the water, the music morphs back to the original form.
I'm not sure that that counts. My recollection of that game was that it's a binary choice of being underwater/at the surface, whereas the music in HL2 in a given spot would change depending on what the enemies were doing, how fast you were going, whether or not you were driving, if you found ammo for the rocket launcher, etc.
Lots of N64 games did this, especially the Rare ones — Banjo-Kazooie had a fair bit of music dynamism, Conker's had even more. It's pretty easy to do, and game devs figured it out pretty early on.
I'm glad you mentioned them. Autechre produced my favourite hacking background music.
I agree, people interested in generative music should definitely give NTS Sessions a listen. But for newcomers I'd suggest starting with their early IDM albums (Incunabula, Amber), as NTS Sessions may sound too "glitchy".
I think nobody else goes where they are going right now. Very forward looking and absolutely amazing listening when you "get it". Elseq, NTS Sessions and now these 40+ hours of live archives released. There is an algorithm that just does things to sound. Sometimes it's just noise that turns into the most beautiful thing in the world.
Their live shows are these kaleidoscopic tours through the sounds (but not songs) of their various albums, recognizable only in passing, in fragments. Their albums are each distinct, but immediately recognizable. Most people can't stomach their music, but those who can swear by it. They aren't afraid to stretch out and take an hour+ for a single song.
Generative music comes full circle. This side of Aphex Twin is more or less directly inspired by Brian Eno's Music for Airports, which itself comes out of Eno's generative music experiments.
Are you sure it was so inspired? Early IDM artists have sometimes admitted to never having heard their forebears (Stockhausen, Steve Reich, etc.); they just arrived at similar concerns independently by noodling with electronics.
Richard D. James only heard Stockhausen for the first time after the press had been claiming his early works were influenced by Stockhausen. It turns out they weren't.
By the 1980s most Stockhausen recordings were out of print (as Stockhausen had bought the rights back from Deutsche Grammophon), and Stockhausen was no longer traveling widely to promote his works, having become rather reclusive as he focused on writing the LICHT operas. So there was limited opportunity to hear his work, and his influence on electronic musicians of that decade is overstated. The advent of drum machines, and then of PCs that, as people discovered, could be modified to produce unusual sounds, is really what sparked electronic experimentation for most producers of that generation.
And you could argue Eno's proceduralism was influenced by Cage, and Cage's proceduralism from Nancarrow... and on and on the family graph of influence goes. Not sure where the full circle is, but there's plenty of thread to tug on here.
Although I like the result, I find the track less soothing without the birds in the background. There's probably a way to generate those as well though, I suppose.
Agreed. I considered adding birds to it but I've found if I play this track in the morning and open my window so I can hear actual birds it's a nice experience.
Sadly these office windows that surround me don't open up, and so I'm left to listen to your awesome work paired with the hum of AC blowers and fluorescent light ballast.
Oh wow... an endless generation of D.R. would send you straight to the asylum! As much as I love the bonkers-ness of Donkey Rhubarb, its existing length is quite enough for one sitting!
One day, someone will feed the Come to Daddy video into a machine learning cluster somewhere and turn it into a 12 hour video that could give Charlie Brooker nightmares.
There are a number of languages / environments to help with the creation of generative music under the umbrella term 'live coding' - the community also tries to keep the performance aspects of music intact.
As if aisatsana was not beautiful enough, it was first performed in London's Barbican Centre in 2012, on a suspended grand piano, swinging in the air!
Also do some research into "prepared piano" to understand how some of the timbres are achieved. Pretty sure there is some "preparation" done to the hammers/strings on the Disklavier in these songs.
Many of the pieces are playable, like Avril 14th. There are a number of people [0] playing it on YouTube and I've been able to knock out a rendition as well.
I've used my nose to hit individual notes in the middle of the keyboard while using my hands at the extreme ends.
I didn't listen carefully to the whole track, but I think you could handle what I did hear that way. Not certain that would work here, just thought I'd point out that there are options beyond your fingers for pushing keys.
J. S. Bach is reputed to have held a stick in his mouth to have an additional note on tap - much more practical and playable than my stupid nose trick. If you used a forked stick or a crafted tool you could easily get more than one note, for that matter.
Ever since Drukqs, Aphex Twin's albums have contained both fast-paced techno and quiet little solo piano pieces that sound well within the range of what one human can play.
Alberto has a much more 'traditional' song structure, with a specific melody/harmonies, and well defined sections. I feel like this Markov Chain process is best suited for more loosely structured ambient tracks. That said, I'd be curious to see the results!
Generative.fm was shared here last week, spawning a number of interesting discussions. If you enjoy the comments here, I suggest checking out that thread.
I find that perfectly quantized tracks sound artificial over time. Not sure how close "aisatsana" sticks to perfectly on-(half)-beat, but adding slightly random quantization offsets to each generated note could be interesting, to lend the results a more "natural" sound.
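Something along these lines with Tone.js might work, though I haven't tested it against the article's code and the API details vary a bit between versions:

    // Schedule each generated note slightly off the grid.
    const synth = new Tone.Synth().toDestination(); // .toMaster() in older versions

    function scheduleNote(note, transportTime) {
      Tone.Transport.schedule((time) => {
        const humanize = 0.02 + (Math.random() - 0.5) * 0.04; // 0..40 ms late
        synth.triggerAttackRelease(note, '8n', time + humanize);
      }, transportTime);
    }

    scheduleNote('C4', '0:0:0');
    scheduleNote('E4', '0:2:0');
    Tone.Transport.start();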
Nice! Simple, but works out super nicely :) Even the occasional odd note doesn't seem too out of place, since the phrases are short and "self-contained".
I also enjoyed the rest of the pieces on generative.fm, they all have a specific "character" which is quite rare. Nice work!
This is excellent. Thank you so much for sharing (and it's certainly a track that I've also wished was longer!) I'd be interested to see if this approach can be implemented within a DAW? This would allow the notes to be played and then treated with FX, EQ and mastering (or maybe just some sound design to get it sounding even closer to the original)? At a push, one could run the output of the browser through the DAW I suppose :)
Similar to what another commenter mentioned, I've done some experiments creating a virtual MIDI port from my code where I push all the notes to. A DAW can then read from this like any other MIDI device, as if it's just a MIDI keyboard that someone is playing. Another approach is to generate a MIDI file of some specified length and load that in the DAW, which is nice as it doesn't need to be generated and recorded in real time.
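For anyone curious, the virtual-port approach can also be driven from the browser via the Web MIDI API; a rough sketch (the port name is hypothetical, and you'd still need something like loopMIDI or an IAC bus on the OS side):

    // Send generated notes to whatever MIDI output a DAW is listening on.
    const access = await navigator.requestMIDIAccess();
    const outputs = [...access.outputs.values()];
    const port = outputs.find((o) => o.name.includes('loopMIDI')) || outputs[0];

    function sendNote(midiNote, velocity = 80, durationMs = 400) {
      port.send([0x90, midiNote, velocity]);                        // note on, channel 1
      setTimeout(() => port.send([0x80, midiNote, 0]), durationMs); // note off
    }

    sendNote(60); // middle C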
Sure - even the built-in MIDI effects are enough to do a lot of generative stuff in Ableton, and when you get into Max the sky is pretty much the limit.
But you could pretty much feed any source of generated MIDI into a DAW in real time, on multiple channels, and then have effects on different channels, etc.
True! Also, if you can't afford Max/MSP, you can do basically the same stuff (although with a clunkier UI) with Pure Data, which is a live-coding 'graphical language' setup just like Max. In fact it is (or was, I'm not sure) developed by the same author, Miller Puckette.
On Windows, you can use a package like loopMIDI to create a virtual MIDI port, which you can use to send the live-generated MIDI data to any DAW.
CBC Radio 2 used to (or might still?) have a similar program called The Signal that was really good. And before it, there was a program called Brave New Waves that was probably my favourite late night show in existence. Really sad it doesn't exist anymore.
One tactic to roll your own is time-stretching. A useful and fun algo (since tamed in 1999) is the phase vocoder. Especially nice for instruments with rich spectrums.
You can use audacity to record audio coming out of your computer - just change "MME" to "Windows WASAPI" next to the input/output device settings.
Then, play audio from generative.fm and record it for 1-2 hours. You can export to MP3 or WAV when done.
I did something similar (but with OBS) in this video where I remixed music from the game Mirror's Edge Catalyst using Sonic Pi - https://www.youtube.com/watch?v=gQ8dD5Bz3_E
How did this come about? At some point you realized you were listening to mostly Autechre, and made a conscious decision to cut everything else out? (I’m listening to LP5 now, somehow it has always been my favorite)
What is a good way to go about working with a new JS library? I have only used Python, Java, and C#. All the tutorials I find on tone.js just include the JS code but no information on how to link the library.
I would like to play around, but getting the dev environment set up has stumped me.
There are many code playground sites [0] that could make the process less painful for you. Essentially you will need to reference the library; you can do it locally, or reference the publicly available version (most libraries have a CDN-hosted version).
You could also google for the playground site plus the library name, because there may already be a playground setup (a project with a reference to the library already set) for that library somewhere.
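If you just want the simplest possible setup without any build tooling, something like this in a single HTML file should be enough (the CDN URL is one example of a hosted copy; pin a specific version in practice):

    <!-- Load Tone.js from a CDN and use the global Tone object. -->
    <script src="https://unpkg.com/tone"></script>
    <script>
      // Browsers require a user gesture before audio can start.
      document.addEventListener('click', async () => {
        await Tone.start();
        new Tone.Synth().toDestination().triggerAttackRelease('C4', '8n');
      }, { once: true });
    </script>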
How do I tell what language to use with this package?
From what I can tell it uses React. I have tried Node, Angular, and React, but this installation page confounds me [0]. As far as I understand, I have been using either npm init, ng init, or create-react-app to initialize the directory for an example project. Then I do npm install tone inside the directory I created.
I have found this [1] playground for tone but it does not elucidate how the library should be or is referenced.
I'd like to work with generative music but the amount I must know and choose between in a js project always seems to freeze me at the project init phase.
In the process of editing this comment I have finally gotten tone.js to work. Here are the steps I followed:
npm install create-react-app
create-react-app tone-test
cd tone-test
npm install tone
add "import Tone from '../node_modules/tone'" to the top of App.js this step is what I was messing up previously i believe
then I just throw tone commands at the bottom of App.js and they play on pageload, this is exactly what I wanted for now.
That's so awesome. It'd be nice to have the bird noises in the background as well though I guess this would be tricky if it's using MIDI to play the music.
I guess I could find some ambient bird noise track and play it alongside...
MIDI (only controls sounds, does not generate them) is widely used to trigger samples, so slip some samples of bird noises into your mix and trigger away.
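In practice, with something like Tone.js (which the rest of the thread suggests the project uses), you could also just loop a bird recording alongside the generated notes; a rough sketch with a placeholder file path:

    // Quiet, looping bird bed under the piano.
    const birds = new Tone.Player({
      url: 'samples/birds.mp3', // hypothetical local recording
      loop: true,
      volume: -18,              // dB, keep it in the background
    }).toDestination();

    Tone.loaded().then(() => birds.start());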
> Aphex Twin (aka Richard James) is known for creating original, complex sounds whenever he can, but his next creation might just take the cake. He tells Groove that he hired a programmer to develop music software based on mutation. Once you give the app an audio sample, it automatically generates six variants on that and asks you to pick your favorite before going on to create more variations -- think of it as natural selection for sweet beats. The software still "needs to be tweeked," and there's no mention of a public launch, but the early output reportedly sounds "totally awesome." Don't be shocked if one of James' post-Syro albums uses this software to create some truly one-of-a-kind tunes.
I can't speak for him, but personally if I was him I'd say 'finally! I've only had these algorithms printed on my records for ~10-20 years'
But, realistically, that'd be like in the Hitch Hiker's Guide to the Galaxy when they said 'but the notice was in Alpha Centauri the whole time! How did you not know?'
He's basically the father of modern claustrophobic ambient with his Selected Ambient Works pt2. A very special thing to fall asleep to and wake up to while it's still playing.
He diverged from Brian Eno's warmer, happier tunes toward something darker.
Agreed on the ambient classification but I always think of him as much more than that as far as influence goes. He's basically had an influence on a huge portion of electronic and non-electronic music over the past few decades.
And Come to Daddy was a joke at the expense of the mainstream of the day, which was The Prodigy back then. Although most of his stuff has always been some kind of joke, even though it's usually brilliant.
Mentally I have a third group for his music, in addition to the two you mentioned, which is full of fast paced melodic music like 'xtal' that doesn't have any of the harsh beats of his music like Come to Daddy.
Actually, Aphex Twin and John Cage are both famous for using a technique called 'prepared piano', in which the artist intentionally alters (or even damages) a piano, e.g. by placing objects among the strings inside the body to create an intentionally detuned or broken sound.
Has this guy never heard of a bar or measure? Reading through the first section where he describes the song as sections of 16 beats makes me cringe, and this is coming from someone admittedly horrible with music theory.
Edit: I was turned off by his constant usage of the word “beats”, not phrase.
Thanks for the feedback. I figured people who understood music would feel this way, but I deliberately chose to omit as many musical words as possible so that readers who weren't familiar with them wouldn't get lost. I would have felt obligated to define "bar" and "measure" if I used them, and ultimately I decided they weren't necessary. Sure, I could have defined what a measure was, that there are four beats in a measure (in this particular case), that each phrase is four measures instead of 16 beats, and what time signatures and quarter notes and eighth notes are. However, I believe all of this would have bloated the article and alienated readers who aren't as familiar with music theory as you and I are. As is, the article is still perfectly readable to someone who does have an understanding of music theory, and all we have to put up with is some non-traditional terms like "half-beat."
Good point. I miswrote the original comment anyway; I meant beats.
I thought a bar is 4 beats, a measure is 4 bars (in 4/4)? Going to have to look this up but is measure == phrase == section? Or I guess, a section can be arbitrary number of bars depending on song structure.
There's a fair amount of looseness in how these terms are used, in practice, depending on the context.
4/4 means 4 beats per bar, and we'll use a quarter note to represent a beat when writing notation. Another example would be 7/8, which is 7 beats per bar, and we'll use an eighth note for the beat when writing notation.
Looking at music in terms of bars or measures only really matters when creating traditional western sheet music notation. If we're writing computer code, or playing by ear, or looking at other kinds of music notation, those terms like "bar" and "measure" become less meaningful, and other terms become more useful or appropriate for describing the structure of the music.
https://musicforprogramming.net/?fortyfive