It took me way longer than I'd like to admit! Enjoy!
Edit: Here's an MP3 export of it https://soundcloud.com/mitchell-anicas-project/corridors-of-... for easier listening.
Interestingly enough, someone made a remix of this song back in 2012: https://www.youtube.com/watch?v=hiy_wuncQ7s
I first started with CM (Common Music), which also has a one-click download for Linux, Mac, and Windows. It also has something of an IDE called Grace (Graphical Realtime Algorithmic Composition Environment), described as "a drag-and-drop, cross-platform app implemented in JUCE (C++) and S7 Scheme." The birds example is amazing! They build up a very real-sounding bird call from scratch.
BUT... to put my PM hat on, I'm not sure who the target audience is here:
A - musicians who want to compose music with code.
B - people who want to learn programming by having fun with music creation.
C - programmers who want to learn music theory.
If the intention is a fun vehicle to teach programming, then I think there are better ways. I looked through the samples, and I think as a kid I would have gotten tired very quickly. There are so many function calls to learn, and right away I have to understand the concept of randomness and what it does.
I think if the idea is to empower musicians, this seems like a lot of work to create music.
In the end, you have to know some music principles and already understand a lot of different programming concepts to actually do anything.
This combination makes programming dependent on knowing music theory, and makes creating music dependent on knowing a lot of programming and exploring all the different functions and libraries before you can apply your music theory at all.
To me that would be frustrating. And for musicians, who I suppose could potentially create some sounds that might be difficult with existing tools, it's just too hard to do.
It would have been far better to build the programming tool around a simple 2D game board (tanks, robots, etc.; I know it's been done) than inside the complex world of music theory.
Or at least make the libraries not depend on music theory, and provide some higher-level presets with built-in abstractions. Example: loop song(type: techno_beat, length: 300) (do some stuff here) end_loop
Make the user feel like a superhuman musician.
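To make the suggestion concrete, here is a minimal Ruby sketch of what such a preset-driven helper could look like. The `song` function, the `:techno_beat` preset, and the event format are all hypothetical; nothing here is part of Sonic Pi or any real library.

```ruby
# Hypothetical preset table: each preset bundles tempo and a drum pattern
# so the user never has to touch music theory directly.
PRESETS = {
  techno_beat: { bpm: 130, pattern: [:kick, :hat, :kick, :hat] }
}

# Expand a preset into a flat list of timed drum events.
def song(type:, length:)
  preset = PRESETS.fetch(type)
  step   = 60.0 / preset[:bpm]                  # seconds per beat
  steps  = (length / step).floor
  (0...steps).map do |i|
    { at: (i * step).round(3), hit: preset[:pattern][i % preset[:pattern].size] }
  end
end

events = song(type: :techno_beat, length: 2)    # two seconds of beat
puts events.inspect
```

A real implementation would hand these events to an audio engine; the point is only that the beginner-facing surface can be one call with two named arguments.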
We are human beings, and we like to create. The digital computer is a meta-tool, in the sense that it is possible to invent entirely new mediums of expression within it. This appears to be a particularly interesting one. I think you underestimate kids. I know I would have been very excited to have access to this tool back in the day, and would have been delighted to be able to use randomness to manipulate sound. Being called a "superhuman musician" for pressing a button would make me want to puke.
Take your PM hat off sometimes and smell the roses.
He mentioned that he tried starting with simple games like you suggest but that kids have unrealistic expectations of what they'll be able to do and it leads to disappointment. They can imagine a commercial quality game quite easily but have no hope of creating one. But with music they're excited to be able to make any sound at all and don't expect to be able to create a film score.
Very interesting. I guess that could also make sense.
When you delve into the world of electronic music, you'll find quite a few people using such tools, doing live-coding performances at their concerts.
I believe there are commercial options, as well, but I'm not familiar with them.
@darpa_escapee - thanks for guiding me into that rabbit hole. Csound looked the most promising to me. Pure Data needs more screenshots and samples; the site was very hard for me to dig through. And Overtone (Clojure) is cool; I wanted to learn Clojure at one point, but I'm sticking with learning Haskell first.
For a more code-like experience, SuperCollider seems pretty fun, though I haven’t gotten deep into it. ChucK (http://chuck.cs.princeton.edu) is another neat option, if you really want to feel like you’re controlling every sample that goes by.
The most impressive thing you can do with it is live coding: https://www.youtube.com/watch?v=KJPdbp1An2s
At the end of the day, computers simply read language and convert it into some basic operations. But still, the average non-programmer feels like software is this black box of mystery that takes years to comprehend. And that's not true!
I know plenty of people programming synths in Ableton, mostly using their mouse to point and drag. And I suspect a lot of them suffer from 'click and drag fatigue'. Most of them would be able to understand all of the parameters being exposed here.
Helping bring practical programming to non-programmers is a noble goal. And I hope they succeed... because what programmer wants to keep re-writing the same boilerplate for the next 20 years?
As a Rubyist, I say this is pretty well on-the-mark. People ask me all the time why I love Ruby, why I think Ruby is the best language ever made, why I think in a thousand years, Ruby will have eaten all the other languages.
It's because, at the end of the day, Ruby is a far more pleasing mental interface to software systems than anything else. If it's not Ruby we're all programming in a thousand years from now, whatever language we are programming in will look a lot like Ruby.
I've heard this argument for basically every niche language, usually things like Lisp and (color) Forth. I've concluded that different people have very different internal mental models of programming.
I wouldn't call Ruby a niche language. You can use it for anything, it's very much a general purpose programming language. Most people only build websites with it, but you do see lots of other kinds of things built with it too, including, you know, a digital audio workstation.
I'd call PHP a niche language before I'd pin that on Ruby.
Also, how do you make something like this without music theory? At the end of the day, you need an abstraction. Why make a new abstraction for creating music when everybody has already agreed on one?
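That agreed-on abstraction is essentially standard note names and MIDI numbers, which every DAW, synth, and live-coding tool already speaks. As a small illustration (plain Ruby, no external libraries, using the common convention where A4 = 69):

```ruby
# Semitone offset of each natural note letter within an octave.
SEMITONES = { "C" => 0, "D" => 2, "E" => 4, "F" => 5,
              "G" => 7, "A" => 9, "B" => 11 }

# Convert a note name like "A4" or "C#3" to a MIDI note number,
# using the convention where C-1 maps to 0 and A4 to 69.
def note_to_midi(name)
  m = name.match(/\A([A-G])([#b]?)(-?\d+)\z/) or raise ArgumentError, name
  letter, accidental, octave = m.captures
  offset = { "#" => 1, "b" => -1, "" => 0 }[accidental]
  (octave.to_i + 1) * 12 + SEMITONES[letter] + offset
end

puts note_to_midi("A4")   # => 69
```

Any new music-coding tool that ignores this shared vocabulary forces users to relearn something the whole industry already standardized.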
Something with a simple API that can be programmed on the fly is pretty cool. This is something I'm personally interested in checking out and playing with.
I feel like all the best tutorials are written for people who are good with music but need to learn to code. Anyone out there have any good resources for using Sonic Pi to learn more music theory as a coder?
My impression is that this comes up every couple of years, but nobody so far has succeeded at actually producing a system that gains meaningful popularity. Not to mention how difficult it was to compile/set up the software for the various projects I have tried.
Another problem is that very few YouTube tutorials showcase rhythms and melodies going beyond something resembling a ping-pong match on speed.
Would love to hear your opinion.
First of all, music creation is too chaotic a process to allow for simply getting things right on the first try. Single notes in arpeggios are changed, entire progressions are taken up and down steps, parameters are continuously played with until you find the right levels, and all of these and more are much better suited to graphical abstraction purely for ease of use. I'd much rather spin a virtual knob to find appropriate levels than type and re-type a variable quantity, especially if I have to wait for that quantity to update every time.
Second, music is all about edge cases. Using control flows to automatically change a piece is nice, but not as nice as quickly rearranging tracks in a visual playlist. Deciding that a particular loop should end in a different way is simple in a visual editor: cut off the tail and put something else in, or make one instance of the loop separate from the others and edit in place. These are processes that take less than a second for me, but would involve careful crafting of conditionals to achieve in Sonic Pi or the like.
All of that said, I think this approach probably has its merits. I've been wishing for scripting in DAWs for a long time, and having a synthesizer that supports writing code to modify waveforms or change how parameters link would be awesome (if this exists, someone please tell me). Projects like Sonic Pi, though, seem to take this past the point of usability.
Reaper has a scripting language. It even comes with a few synths written in it, complete with source.
I want a DAW with the flexibility of Reaper and the UX of Ableton.
For starters, you could have a skeleton of a script with accessible parameters, given knobs. That would look like a DAW, except with text instead of pseudo-design full of screws and LCDs that mimic real objects (skeuomorphism). Yes, you want buttons; visual programming still sucks. Demo coders like Farbrausch program their own demo tools, e.g. Werkkzeug 3, for exactly that reason, don't they?

Taking graphics programming as the comparison: of course textures, models, and so on are modeled in an analogue fashion. Nobody programs a human.md3 to evolve from an embryo for fun, but in principle, someday it could be done. Music is a lot like vector graphic art: you can do a whole lot with simple shapes and gradients. And you can perhaps program complicated sound effects more easily than as a 5-second loop rendered to wav and pitched by the DAW, if you know what I mean.
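The "script skeleton with accessible parameters" idea could be sketched in plain Ruby like this. Everything here is hypothetical (the `Patch` class and `knob` declaration are invented for illustration, not any real DAW's API); the point is that a host UI could read the declared ranges and render each parameter as a dial.

```ruby
# Hypothetical patch skeleton: code declares knobs; a host UI draws them.
class Patch
  attr_reader :params

  def initialize
    @params = {}
  end

  # Declare a knob with a range; the host would render a dial for it.
  def knob(name, min:, max:, default:)
    @params[name] = { min: min, max: max, value: default }
  end

  # Setting a knob clamps to its declared range, like turning a dial.
  def set(name, value)
    p = @params.fetch(name)
    p[:value] = value.clamp(p[:min], p[:max])
  end

  def [](name)
    @params.fetch(name)[:value]
  end
end

patch = Patch.new
patch.knob :cutoff,    min: 20,  max: 20_000, default: 1_000
patch.knob :resonance, min: 0.0, max: 1.0,    default: 0.2
patch.set :cutoff, 50_000          # out of range, gets clamped
puts patch[:cutoff]                # => 20000
```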
Note composition, as you remark, is especially beside the point. The drone/noise perspective might be an extremely misleading example, but music programming should be able to paint outside the classical frame. It should let you define sweet spots of resonance, instead of chasing harmony by ear. That does require deep understanding, so instead I'm happy with finger painting... because it's so close to the metal, err, paper.
It's very sad, because I have no idea of the potential. Composition to me is choosing an instrument and working simple known melodies up into more complex ones until it sounds harmonious thanks to obeying the circle of fifths, but that's mostly it, and mostly rather superficial, which doesn't matter as long as the instrument sounds nice. And if it doesn't, I'll split the melody by octaves, say, choose two different instruments, and alter the octaves to get a high contrast (shout out to my man). Because of the loop nature of pattern-based composition, I am mostly not interested in arrangement. This again compares to shader programming. And even big studios basically just stitch together single scenes. ... yadda yadda yadda.
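Incidentally, the circle-of-fifths rule mentioned above is one place where code genuinely is a natural fit, because it reduces to modular arithmetic: moving up a perfect fifth adds 7 semitones mod 12, and twelve such steps visit every pitch class exactly once. A quick Ruby check:

```ruby
# Pitch-class names in semitone order (sharps only, for simplicity).
NAMES = %w[C C# D D# E F F# G G# A A# B]

# Stepping by perfect fifths (7 semitones) from C, wrapping mod 12.
circle = (0...12).map { |i| NAMES[(i * 7) % 12] }
puts circle.join(" ")   # C G D A E B F# C# G# D# A# F
```

Because 7 and 12 are coprime, the walk covers all twelve pitch classes before returning to C, which is exactly why the circle "closes".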
You might also compare the violin to the voice. Far more people can or think they could sing. Making the violin sing is just much more complicated, but not exactly boring.
I'm working on a DAW that you can live-code with JS and math expressions if you're interested: https://ossia.io
C++ just-in-time compilation of sound effects is coming in the next few months (JS just does not cut it for real-time audio with per-sample access).
Here's a live coding video from its front page: https://www.youtube.com/watch?list=PLybSFICi4UliK17U6rxPneXA...
Given all the other tools available, from DAWs to trackers to VSTs to hardware synths, why would a musician - as opposed to a coder - want to climb the incredibly steep learning curve?
There should be a special name for this fallacy, because it occurs so often on HN.
Just because a domain looks a bit like a trivial mathematical operation to mathematically inclined outsiders doesn't mean that the math really is trivial, or even that the real core of the domain is best summarised as a trivial mapping.
To a coder, music looks like a sequence of instructions that make sounds, so of course it's natural to assume that it's just like code. Music is a series of events, so let's write code that makes a series of events. How hard can it be?
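Spelled out, that naive "music is a series of events" model really is tiny, which is exactly what makes it seductive. A Ruby sketch (the event format is purely illustrative):

```ruby
# The coder's-eye view: a melody as a list of timed note events.
notes  = [60, 64, 67, 72]   # C major arpeggio as MIDI note numbers
events = notes.each_with_index.map do |pitch, i|
  # Rigid half-second grid, one fixed loudness for every note.
  { time: i * 0.5, pitch: pitch, velocity: 90 }
end
puts events.inspect
```

Everything that makes a performance musical (timing feel, dynamics, phrasing, timbre) is exactly what this representation leaves out.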
To a musician, music is tactile, improvisatory, and sculpted. It's nothing like code. At all.
Even if you're using a DAW with a mouse, you're still shifting elements around in time and sculpting fine nuances of the sound with controller curves.
So code is a terrible UI for music, and live code is even worse. You have to spend so much time on irrelevant distractions - creating buffers, managing objects, iterating through arrays - that there's almost no connection left between the sounds that are being made and your expressive intent.
So live coding only works if your expressive intent is trite and lacking nuance and depth. The only people who do it are hobby coders and a small community of academics who are trying to sell it as a valid revolutionary activity.
Interestingly trackers, which are by far the most successful coding environment for music, also have the lowest conceptual overhead.
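The low conceptual overhead of trackers is easy to see: a pattern is just a grid of rows stepped through at a fixed rate. A simplified Ruby illustration (the row format here is invented for the example, not any specific tracker's file layout):

```ruby
ROWS_PER_BEAT = 4
BPM = 120

# One tracker pattern: each row is [note, instrument];
# "---" means nothing new happens on that row.
pattern = [
  ["C-4", "01"],
  ["---", "--"],
  ["E-4", "01"],
  ["G-4", "01"],
]

seconds_per_row = 60.0 / BPM / ROWS_PER_BEAT
pattern.each_with_index do |(note, inst), row|
  next if note == "---"
  printf("t=%.3fs play %s with instrument %s\n",
         row * seconds_per_row, note, inst)
end
```

There is no control flow to reason about: time simply advances one row per tick, which is most of why people pick trackers up so quickly.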
% lsb_release -sic
% apt-cache madison sonic-pi
sonic-pi | 2.10.0~repack-2 | https://deb.debian.org/debian stretch/main amd64 Packages
sonic-pi | 2.10.0~repack-2 | https://deb.debian.org/debian stretch/main Sources
EarSketch sort of looks like what I'm looking for, but it's web-only and I can't quite get it to run.
Unfortunately, last time I tried a few years ago, I couldn't get it to compile.
Does anyone know if there's another good Scheme alternative to it that doesn't have a mountain of dependencies?
 - http://commonmusic.sourceforge.net/
The idea, though, is really, really cool!