
Csound: A sound and music computing system - diaphanous
https://csound.com/
======
flats
Long-time user of Csound, Max, and Pd (& have spent a little time with ChucK &
Supercollider).

Max & Pd are fantastic for incorporating sensors & other physical interfaces,
real-time interactivity, routing signals, creating visualizations, & all that.
Supercollider is fantastic for generative music & using control flow
compositionally.

Csound, on the other hand, is really, really great for creating beautiful &
nuanced electronic compositions. The sound quality is unrivaled (emphasis on
accuracy over interactivity), the interface is great (plain text, which, as a
programmer, I prefer), and GEN routines plus the myriad opcodes allow you to
do some heavy, intricate aural spelunking.

As a bonus (& as mentioned by others), it’s highly performant & fairly easy to
integrate with other languages & environments (including Max & Pd!).

The syntax is a bit strange, but once you get over that, Csound is an amazing
piece of software.

~~~
dominotw
> beautiful & nuanced electronic compositions.

wondering if you have any favorites that you can recommend.

~~~
andrewg
This one’s my favorite:
[https://m.youtube.com/watch?v=TecDlpGAhq0](https://m.youtube.com/watch?v=TecDlpGAhq0)

Really beautiful album, especially the 5.1 surround version.

~~~
lukasb
I clicked the link and was like "ha, what, no way BT uses CSound" but I was
wrong!

[http://simoncpage.co.uk/blog/2008/10/bt-this-binary-universe/](http://simoncpage.co.uk/blog/2008/10/bt-this-binary-universe/)

~~~
TheRealPomax
There's no real reason to consider csound any different from any other
synthesiser in this respect: someone who understands how to program a synth
just needs to know "which knobs and switches" there are - whether they're
physical hardware, a virtual instrument's GUI, or xml statements doesn't
matter all that much... the only thing that matters is being able to set and
control VCOs/LFOs, ADSR, resonance, unison, etc. =)

And dear lord do some people _understand_ synths.

~~~
kian
any links to anyone who both has this deep understanding and teaches it?

~~~
TheRealPomax
YouTube is actually an excellent resource here - search for "intro to
synthesis" or "intro to synthesizers", then move up to "advanced synthesis"
(or related terms) - there are boatloads of excellent videos teaching you
more than you ever thought was possible.

------
musikele
I used Csound for a university project some years ago, to manipulate a
bunch of files to play together - adjust volumes, modify playback speed
without changing pitch, etc. I then wrote a GUI to manage Csound. After 8
years, here's what I remember:

- It was very complicated to wrap a C library in Java, at least for a new
graduate like me.

- There was no learning material for the kind of things I needed to do
(manipulating wav files). Learning Csound at the time was a mess - no blog
posts in 2010.

- If I read the Csound source file I wrote in 2010, I'm sure I would not
understand a single line.

Some years later, when Coursera was just starting to offer free courses to the
world, I encountered another musical programming language called ChucK
([https://chuck.cs.princeton.edu/](https://chuck.cs.princeton.edu/)). The
course was very well done, and ChucK, too, was so simple that I almost
decided to rewrite the whole project in it. I wrote some very nice pieces of music
while studying ChucK.

Good luck to all the musicians out there!

~~~
DonHopkins
I don't know what the state of CSound's SWIG support was when you integrated
it with Java by hand, but CSound now uses SWIG to generate Python and Java
wrappers. (I know what you mean: writing wrappers for something as complex as
CSound for any language is an enormous tedious error-prone pain in the wazoo,
which is why it's so nice to have a tool like SWIG that does it automatically
for you.)

[http://write.flossmanuals.net/csound/a-the-csound-api/](http://write.flossmanuals.net/csound/a-the-csound-api/)

The cool thing about SWIG (Simplified Wrapper and Interface Generator) is
that it supports multiple scripting languages (and other kinds of languages,
since it's debatable whether Java is a "scripting language"). It understands most of C++,
so it can automatically read in header files and generate wrappers from them,
but you can also tailor and customize the interfaces and how it marshals
different data types back and forth, to create more efficient, convenient
wrappers, too.

[http://www.swig.org/](http://www.swig.org/)

SWIG is the brainchild of David Beazley, Python hacker extraordinaire. His
talks are amazing!

[https://www.dabeaz.com/talks.html](https://www.dabeaz.com/talks.html)

[https://en.wikipedia.org/wiki/David_M._Beazley](https://en.wikipedia.org/wiki/David_M._Beazley)

>David Beazley is an American software engineer. He has made significant
contributions to the Python developer community, which includes writing the
definitive Python reference text Python Essential Reference, the SWIG software
tool for creating language agnostic C and C++ extensions, and the PLY parsing
tool. He has served on the program committees for PyCon and the O'Reilly Open
Source Convention, and was elected a fellow of the Python Software Foundation
in 2002.

------
phronesis
This track by BT was written entirely in Csound and is one of my favourite
pieces of electronic music:
[https://www.youtube.com/watch?v=ve8WaGmyhfI](https://www.youtube.com/watch?v=ve8WaGmyhfI)

~~~
zebproj
Yeah, this piece was a huge source of inspiration for me when I was learning
Csound. I actually had a chance to ask BT about this particular piece once,
and whether the rumors were true about it all being done in Csound. The
guitar riff you hear halfway through is actually sampled and played via the
diskin opcode. Other than that, it's all Csound. Still a very impressive
feat, especially considering that the piece was originally mixed in 5.1.

~~~
phronesis
Nice! I'd wondered about that guitar; it sounds too real to be pure
Csound. Thanks for the insight. And yes, I had the opportunity to listen to
the whole album on a decent 5.1 system at uni a while back and it's pretty
spectacular.

------
vortico
I'm looking for a volunteer to add Csound as a script backend to VCV Prototype
([https://vcvrack.com/Prototype](https://vcvrack.com/Prototype)), or any other
scripting language of your choice that's not already available. Open an issue
if interested. [https://github.com/VCVRack/VCV-Prototype#adding-a-script-engine](https://github.com/VCVRack/VCV-Prototype#adding-a-script-engine)

~~~
rorywalsh
You can already use Csound in VCV Rack. Check out this project:
[https://github.com/rorywalsh/CabbageRack](https://github.com/rorywalsh/CabbageRack)

------
ofrzeta
There's also ChucK from Princeton, which I find more intuitive than Csound
or alternatives such as SuperCollider. Much smaller community than
SuperCollider, though, I guess.

[https://chuck.cs.princeton.edu/](https://chuck.cs.princeton.edu/)

~~~
zebproj
Also a fraction of the DSP capabilities, even with the available third-party
plugins (chugins).

------
zebproj
Csound is a fantastic sound design tool. It was my main composition tool for
many years, before I started building my own system. The score/orchestra
paradigm is incredibly powerful. It's something I miss quite often in my
current system. When I was in music school, I learned how to program by
writing programs in Python to generate scores that could then be played by
Csound. Scores are so trivial to generate that you actually don't need to
learn that much programming to generate very satisfying algorithmic
compositions.
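That workflow can be sketched in a few lines of Python. The instrument number and p-field layout below are hypothetical (a typical "instr 1" design, not something from this thread):

```python
# Generate a tiny Csound score as plain text. The p-field layout here
# (p4 = amplitude, p5 = frequency in Hz) is illustrative -- the score
# format just needs to match whatever the orchestra's instr 1 expects.

def make_score(base_hz=220.0, notes=8, dur=0.5):
    lines = []
    for n in range(notes):
        start = n * dur                      # p2: start time in beats
        freq = base_hz * 2 ** (n / 12)       # ascend by semitones
        lines.append(f"i1 {start:.2f} {dur:.2f} 0.5 {freq:.2f}")
    lines.append("e")                        # end-of-score statement
    return "\n".join(lines)

print(make_score())
```

The resulting text is exactly what Csound's score section consumes, which is why generating it from any general-purpose language is so easy.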

I appreciate that the title of this post calls it a "sound and music computing
system" rather than a "musical programming language". In truth, Csound is more
of a text-based modular synthesis environment than a programming language.

As others have said, the orchestra syntax is a bit strange at first, but you
do get used to it. Writing Csound code feels more like patching a modular
synthesizer rather than writing a computer program. It's basically a DSL for
connecting small sound/signal modules (called opcodes) together. Most of the
time one thinks about things in terms of signal flow and not computer logic. A
common mistake I see new Csounders make is to immediately reach for the
conditional statements and loops. They often don't behave the way you expect,
so people get frustrated.

The Csound dev team has a very strong emphasis on backwards compatibility, to
the point where older opcodes do not get bugfixes, in case someone is
exploiting the bug in their compositions. The programmer in me groans a little
bit, but the composer takes great comfort in the fact that pieces I write now
will be playable for many years or even decades (some of the Csound test
pieces, like Trapped in Convert by Richard Boulanger or Xanadu by Joseph
Kung, are over 30 years old and still run).

I've been told that many works written in MusicN languages (a precursor to
Csound) have been ported to run in Csound, which means that the legacy of
Csound includes computer music written in the 60s! I wish I knew where to find
those, as I quite enjoy computer music history.

~~~
severak_cz
> Writing Csound code feels more like patching a modular synthesizer rather
> than writing a computer program.

I agree with this point. I am using Csound for creating VST plugins (with
the Cabbage framework[1]), for which Csound is extremely productive. I can
have a working prototype (which I can actually play on my keyboard) ready in
something like 15 minutes.

Once you get over the somewhat strange syntax[2] and understand the
difference between k-time and i-time[3], you can do any DSP processing
without actually diving into hard math.

Cabbage has nice beginner documentation on Csound[4].

[1]: [https://cabbageaudio.com/](https://cabbageaudio.com/)

[2]: output opcode paramA, paramB

[3]: k-time - once per control cycle during audio processing, i-time - once
at initialization of an instrument/note

[4]:
[https://cabbageaudio.com/docs/file_structure_and_syntax/](https://cabbageaudio.com/docs/file_structure_and_syntax/)
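The i-time/k-time distinction in [3] can be sketched in plain Python. This is an illustrative model, not Csound's actual engine; the block size and the envelope math are made up:

```python
import math

# Sketch of Csound's rate model (illustrative only):
# - i-time values are computed once, when a note starts
# - k-rate values update once per control block (every ksmps samples)
# - a-rate values update on every single sample

SR = 44100       # sample rate
KSMPS = 32       # samples per control block (so k-rate = SR / KSMPS)

def render_note(duration_s=0.1, freq=440.0):
    # i-time: evaluated once at note initialization
    total = int(duration_s * SR)
    phase_inc = 2 * math.pi * freq / SR
    out, phase = [], 0.0
    for block in range(0, total, KSMPS):
        # k-rate: one new envelope value per control block
        env = 1.0 - block / total
        for _ in range(min(KSMPS, total - block)):
            # a-rate: the oscillator advances every sample
            out.append(env * math.sin(phase))
            phase += phase_inc
    return out
```

The practical upshot is the one the comment describes: cheap things (envelopes, parameter changes) run at k-rate, while only the audio path itself pays per-sample cost.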

------
fushifushi
I’ve been learning Csound for about a year now, and I’ve documented my
experiences in my blog at
[https://jasonhallen.com/blog](https://jasonhallen.com/blog). My first three
blog posts talk about 1) why I chose Csound over Max/MSP, SuperCollider, and
ChucK, 2) how mysterious the online presence of Csound is, and 3) what
resources have been most helpful for me as I’ve learned Csound.

I love Csound, but it has been very challenging to learn. The first two or
three months were an uphill battle where I thought about quitting a few times.
Admittedly, I was brand new to both computer programming and digital audio
generation. I’m sure if I had more experience with those then picking up
Csound wouldn’t have been as painful. The biggest early challenge for me was
understanding how the three variable rates (initialization rate, control rate,
and audio rate) work. I felt like I had to hit my head against a wall for
three weeks before that clicked. But once you become familiar with the basic
principles and syntax of Csound, you start picking up other topics and
opcodes more quickly.

I’ve used both CsoundQt and Cabbage as Csound development environments. They
each have their pros and cons and best use cases. I’ve been using Cabbage more
because it has an ingenious way of controlling the instrument interface within
the Csound code itself.

There are a ton of opcodes available to do many sound design techniques out of
the box, or you can code your own opcodes to do anything you’d like. The user
community is very responsive and helpful on the Csound listserv
([http://csound.1045644.n5.nabble.com/](http://csound.1045644.n5.nabble.com/)).
And the developers have made it so you can integrate Csound with all sorts of
other languages and software.

------
processing
Love Csound.

Does anyone use Csound for Live? It's no longer working since the Ableton 10.1
update and the developers do not respond to emails. Incredible set of audio
tools.

[https://csoundforlive.com/](https://csoundforlive.com/)

Anyone managed to fix in Max/Cabbage for Live 10.1/Max 8.1?

~~~
rorywalsh
Latest Csound object for Max is here:
[https://github.com/csound/csound_tilde/releases](https://github.com/csound/csound_tilde/releases)
And I've not had any reports of problems with Cabbage in Live 10. It's what I
use myself for testing.

------
hnhg
How does something like sonic-pi or tidal cycles compare with this? I wanted
to try those out since they look fun and accessible.
[http://sonic-pi.net/](http://sonic-pi.net/)
[https://tidalcycles.org/](https://tidalcycles.org/)

~~~
Optimal_Persona
Sonic Pi (IMO) is very quick to get started with because it's
batteries-included - you have everything you need in a single installer.
Within the UI are tutorials, samples, synths, effects, a language reference,
metering, 10 code buffers, and a built-in OSC server - so once you configure
your audio/MIDI devices you should be good to go. Sonic Pi is based on a
subset of Ruby (though it's not clear which Ruby version, or what's left
out), and Sam Aaron (the dev) has indicated he's considering moving to
another language. Though I haven't played around with Raspberry Pi, Sonic Pi
integration is another benefit if you're into hardware.

I remember trying to install Tidal Cycles a few years back and having some
difficulty - IIRC it's a library that requires other components/config. Same
for Overtone [1] (a Clojure frontend to SuperCollider); it took a bit of
time to configure Leiningen at first.

The Cabbage [2] interface to Csound would probably be closest to Sonic Pi,
rather than straight Csound itself.

Also worth investigating is Pyo [3] - a Python interface to DSP code written
in C.

So many interesting (and free) choices - it really just comes down to your
language preference. I make a lot of music in DAWs like Ableton Live & Apple
Logic, but I have some fairly original, very specific ideas around exploring
pitch, rhythm & harmony that a text-based language is better suited for.

[1]
[https://github.com/overtone/overtone](https://github.com/overtone/overtone)

[2] [https://cabbageaudio.com/](https://cabbageaudio.com/)

[3]
[http://ajaxsoundstudio.com/software/pyo/](http://ajaxsoundstudio.com/software/pyo/)

~~~
hnhg
Very helpful, thanks.

------
DonHopkins
From "Recontextualizing Ambient Music in Csound" by Kim Cascone, on the
Csound community website:

[http://csounds.com/](http://csounds.com/)

[http://csounds.com/cascone/](http://csounds.com/cascone/)

"One of the motives for being an artist is to recreate a condition where
you're actually out of your depth, where you're uncertain, no longer
controlling yourself, yet you're generating something, like surfing as opposed
to digging a tunnel. Tunnel-digging activity is necessary, but what artists
like, if they still like what they're doing, is the surfing" — Brian Eno
(Aurora Musicalis. ArtForum Magazine. 24:10. 1986)

------
rkagerer
What does it do?

These hint at some neat possibilities but I still don't get it:

 _a sound and music computing system_

 _a tool for composing electro-acoustic pieces_

 _real-time_

~~~
bensonalec
Basically it's for creating music programmatically. Conceptually it's solid,
although it does lack some major features (you have to convert notes to
frequencies, and as far as I'm aware there's no built-in option to use the
note itself).

~~~
kroger
You can use different ways to enter pitch [0]. A common way is to use the
"octave.pitch" format, such as 08.04, where 08 is the central octave (if I
remember correctly) and 04 is the note E. It's common to use a score generator
such as PythonScore [1, 2], and some people (myself included) like to use a
regular programming language to generate the scores. I've used Common Lisp,
Tcl, and Python in the past. Csound has a few frontends [3] that people may
like. AFAIK, Csound can read notes from a MIDI keyboard as well.

[0]
[http://www.csounds.com/manual/html/PitchTop.html](http://www.csounds.com/manual/html/PitchTop.html)

[1]
[http://jacobjoaquin.github.io/csd/pysco.html](http://jacobjoaquin.github.io/csd/pysco.html)

[2] [http://write.flossmanuals.net/csound/methods-of-writing-csound-scores/](http://write.flossmanuals.net/csound/methods-of-writing-csound-scores/)

[3] [https://csound.com/frontends.html](https://csound.com/frontends.html)

EDIT: remove some repetition
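As a sketch of how the "octave.pitch" format maps to frequency (this mirrors Csound's cpspch conversion as I understand it; the helper name is mine):

```python
# Convert Csound "octave.pitch" (pch) notation to frequency in Hz.
# 8.00 is middle C; the two digits after the point are a pitch class
# (00 = C ... 11 = B), not a decimal fraction. Helper name is illustrative.

def pch_to_cps(pch: float) -> float:
    octave = int(pch)
    pitch_class = round((pch - octave) * 100)   # e.g. 8.04 -> 4 (E)
    # 2**8 * 1.02197503906 ~= 261.626 Hz, i.e. middle C
    return 2 ** (octave + pitch_class / 12) * 1.02197503906

print(round(pch_to_cps(8.00), 2))   # middle C, ~261.63
print(round(pch_to_cps(8.09), 2))   # A above middle C, ~440.0
```

Incrementing the integer part by 1 doubles the frequency (one octave), which is what makes the notation convenient for score generators.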

------
matt2000
I don't know much about this, but sure did like seeing this in the
description:

"One of the main principles in Csound development is to guarantee backwards
compatibility. You can still render a Csound source file from 1986 on the
latest Csound release, and you should be able to render a file written today
with the latest Csound in 2036."

I wish more "modern" languages and frameworks put in the effort on this point
as well.

------
kitotik
I think most have moved on to SuperCollider or Pure Data.

A proprietary but much more approachable and functional offshoot is Max/MSP.

~~~
kristopolous
Have you used it? I usually don't use proprietary software but I've been
thinking of giving it a go. Supposedly bitwig has a language as well but I
haven't used it either

~~~
DonHopkins
Miller Puckette originally created Max at IRCAM (Institut de Recherche et
Coordination Acoustique/Musique) in 1985, which is now marketed by Cycling
'74.

He later developed the open source Pure Data (Pd) in the '90s, which also
included real-time signal processing features (streams of high-frequency
audio flowing over wires, whose samples run at a much higher rate than the
frame rate of the visual program's control signals).

Max/MSP and Pd are both visual dataflow programming languages, where data
flows along wires between icons, but some data (like audio stream samples)
flows much faster than other data (like messages, logic, and control signals).

Cycling '74 later adapted Puckette's work on Pure Data and called it
"Max/MSP", which stands for both "Max Signal Processing" and his initials
(Miller Smith Puckette).

[https://en.wikipedia.org/wiki/Miller_Puckette](https://en.wikipedia.org/wiki/Miller_Puckette)

[https://en.wikipedia.org/wiki/IRCAM](https://en.wikipedia.org/wiki/IRCAM)

[https://en.wikipedia.org/wiki/Pure_Data](https://en.wikipedia.org/wiki/Pure_Data)

[https://en.wikipedia.org/wiki/Max_(software)](https://en.wikipedia.org/wiki/Max_\(software\))

>Max is named after composer Max Mathews, and can be considered a descendant
of his MUSIC language, though its graphical nature disguises that fact. Like
most MUSIC-N languages, Max distinguishes between two levels of time: that of
an event scheduler, and that of the DSP (this corresponds to the distinction
between k-rate and a-rate processes in Csound, and control rate vs. audio rate
in SuperCollider).

~~~
jancsika
> flowing streams of high frequency audio over wires whose samples are at a
> much higher rate than the frame rate of the visual program's control signals

Pd's "control" classes are probably confusing to programmers coming from other
computer music environments. There's no control rate in Pd.

The "control" classes (the ones without a tilde at the end of their name) send
messages in zero logical time and may be triggered sporadically. Building
diagrams out of them is like building an immediate-mode Rube Goldberg machine.

There are some conversion classes that allow control <-> signal object
communication. Some of the control-to-signal classes do conversion at what you
could call "control rate"-- e.g., they might compute a single value and copy
it to the rest of the samples for output. Others do sub-sample accuracy
bounded by the precision of floats (single or double as Pd can be compiled to
use either).

If you're weird you can use [bang~] to emulate a control rate diagram-- it
will literally output the "bang" message at each block which you can then feed
to a downstream Rube Goldberg machine in order to do arbitrary message passing
each block. But that's less ergonomic than just using signal objects which a)
all send the same type of vector data and b) get automatically ordered by the
graph builder and are therefore easier to read. (The irony being that signal
diagrams generally deal with DSP, so the readable part of the diagram is the
most conceptually complex and the simple stuff like branching or counting to
10 tends to end up looking like a plate of spaghetti.)

There's also overhead in the control message dispatch which would probably
undo any imagined efficiency gains. Signal diagrams on the other hand get
sorted into an array of function callbacks (as needed during runtime or
editing time), so there's no chasing of pointers or type-checking to eat up
cycles.

------
Lord_Nightmare
There was once another sort of 'competitor' to Csound called cmusic; I believe
the last surviving unmaintained port of it can be found at
[http://yadegari.org/carl.html](http://yadegari.org/carl.html)

------
DonHopkins
The One Laptop Per Child (OLPC) project adopted CSound for use in its
interactive musical activities via Python.

[https://en.wikipedia.org/wiki/Csound#One_Laptop_per_Child_(O...](https://en.wikipedia.org/wiki/Csound#One_Laptop_per_Child_\(OLPC\))

>Csound5 was chosen to be the audio/music development system for the OLPC
project on the XO-1 Laptop platform.

[http://wiki.laptop.org/go/Csound](http://wiki.laptop.org/go/Csound)

Csound is the music and audio signal processing language originally developed
by MIT's Barry Vercoe and now expanded and maintained by a world-wide
community, as Free Software. Csound will provide audio services for the XO
computer. Csound is both a programming language and a sound synthesis engine.
Csound, as included in the OLPC project, can be used by Activities or directly
by children and teachers. It can be accessed in a variety of ways. In the XO
platform, two basic ways are provided:

Through the Python programming environment: eg. programmed in Activities.

Through its 'classic' command-line frontend, directly invoking it from the
Terminal activity.

Further information about Csound can be found on its official website:
[http://csounds.com/](http://csounds.com/). The canonical Csound sources and
multi-platform binaries are hosted by Sourceforge.

Activities

Csound Editor - view, edit and perform Csound files

[http://wiki.laptop.org/go/Csound:Csound_Editor](http://wiki.laptop.org/go/Csound:Csound_Editor)

Audio Loop Remixer - perform audio loops and apply a variety of effects

[http://wiki.laptop.org/go/Csound:Audio_Loop_Remixer](http://wiki.laptop.org/go/Csound:Audio_Loop_Remixer)

MIDI File Player - performs MIDI files using the donated General MIDI
soundfont

[http://wiki.laptop.org/go/Csound:MIDI_File_Player](http://wiki.laptop.org/go/Csound:MIDI_File_Player)

Instrument Player - a keyboard interface to play a variety of instruments

[http://wiki.laptop.org/go/Csound:Instrument_Player](http://wiki.laptop.org/go/Csound:Instrument_Player)

TamTam - Tam Tam uses Csound, but you would never know it as its interface is
designed to wrap the Csound engine with a child-friendly look and feel. This
excellent group of Activities allows kids to make sounds, make music, jam,
record and transform their voices in an intuitive way. TamTam Edit allows
students to patch together Csound's opcodes (modules) and teaches them all
about signals, synthesis, and synthesizers. TamTam Activities demonstrate well
how the power of Csound can be harnessed in the XO platform.

[http://wiki.laptop.org/go/TamTam](http://wiki.laptop.org/go/TamTam)

GregCsoundActivities.zip - A number of Csound Server-based activities
developed by Greg for Build 542, including a pretty cool Pitch-Tracker
Bouncing Ball Activity and a Pitch Reverse Game - both lots of fun for kids,
but not currently supported by the latest builds and security models.

[http://csounds.com/GregCsoundActivities.zip](http://csounds.com/GregCsoundActivities.zip)

Pippy - Pippy uses Csound to help teach children the Python programming
language and to build XO Activities.

[http://wiki.laptop.org/go/Pippy](http://wiki.laptop.org/go/Pippy)

Step - A simple 8-note step sequencer that children will use to play music and
record their own loops for use in other sample-based activities. Step uses
csndsugui.

[http://www.thumbuki.com/xo/step.activity.zip](http://www.thumbuki.com/xo/step.activity.zip)

Funny Talk - An activity that children can use to record their voices with the
built-in microphone, and process them with effects such as reverb, echo,
chorus, etc. Funny Talk allows a child to save their manipulated voices as
soundfiles so that they can be used in other musical activities. Funny Talk
uses csndsugui.

[http://wiki.laptop.org/go/Csound:Funny_Talk](http://wiki.laptop.org/go/Csound:Funny_Talk)

[http://wiki.laptop.org/go/Csound_tutorials](http://wiki.laptop.org/go/Csound_tutorials)

[http://wiki.laptop.org/go/Csound_TOOTS](http://wiki.laptop.org/go/Csound_TOOTS)

------
DonHopkins
Here are some notes about visual programming languages and real-time
performance tools for music and video that I wrote to the LEV mailing list in
2000 (plus some additional notes and email I saved over the years).

[https://www.donhopkins.com/home/archive/visual-programming/bounce-notes.txt](https://www.donhopkins.com/home/archive/visual-programming/bounce-notes.txt)

That link also includes some interesting discussion with Jaron Lanier about
visual programming language design.

Image/ine was a software instrument for realtime video manipulation and MIDI
processing from STEIM (Studio for Electro-Instrumental Music) in Amsterdam, by
Steina Vasulka and Tom Demeyer (1996-2001). It ran on a Mac, and you could
write plug-ins for it.

[https://steim.org/](https://steim.org/)

[https://en.wikipedia.org/wiki/STEIM](https://en.wikipedia.org/wiki/STEIM)

[https://v2.nl/archive/works/image-ine](https://v2.nl/archive/works/image-ine)

Hookup is a real time visual programming language for controlling MIDI and
playing music and rendering graphics, developed by David Levitt (who shared an
office with Miller Puckette at MIT), which also incorporated the Macromedia
Director MMP player plug-in (so it could read in Director files and play their
content under visual program control).

[http://www.sdela.dds.nl/sfd/isadora.html](http://www.sdela.dds.nl/sfd/isadora.html)

>Mark Coniglio: Here's a bit of history. In 1986 my soon-to-be mentor and
Interactor collaborator Mort Subotnick had just come from a residency at MIT
where he was using a program called Hookup created by a student there named
David Levitt. Hookup was the first program I know of that used the "patch-
cord" metaphor, i.e., modules that manipulate data are linked by virtual
wires, the connection of which is determined by the user. For those in the
world of early analog, patch-cord programmed synthesizers, this was a familiar
interface. Mort was using David's program to do tempo following of MIDI
instruments -- this allowed him to lock hardware MIDI sequences to the tempo
of the live performers. I was a composition student at CalArts at the time,
and word had gotten around that I was a good programmer. So Mort contacted me
to see if I could hardcode some of the ideas he had implemented in Hookup on a
Mac, so that he could use them in his next performance. That program (used in
Mort's 1987 multimedia work "Hungers") would eventually become Interactor.
Mort designed the functionality of the early versions, but I became more
influential in the design as time went on. [...]

>Mark: Yes, that's true and importantly a kind of creative intuition was
creeping back in through the development of these new visual interface
possibilities for software. Part of the thing I reacted to in Hookup was the
way you could easily drop modules into the program and try things; a lot like
you could do with the patch-cord synthesizers. I may not have realized it
explicitly then, but this ability to program improvisationally allowed for
that kind of artful playfulness that is so important. So I set out to make a
similar user interface for Interactor. The creation of Isadora was a natural
outgrowth of Interactor. In 1996 Troika Ranch had a two-week residency at
STEIM, where I first saw Tom Demeyer's real-time video processing program
Image/ine. I first started using Image/ine in concert with Interactor, because
Image/ine didn't allow the kind of complicated interactive decision making
that I was used to having in Interactor. So, Interactor would process the MIDI
data from my interactive sensors, and then tell Image/ine what to do. By 1998
I was using Image/ine in a major way in my performances with Troika Ranch.
[...]

>Mark: Isadora and Max both inherit the modules linked by the patch-cord
metaphor from Hookup. But unlike Max, each Isadora module shows the parameter
names and current values for all of its inputs and outputs, and many modules
give real-time graphic feedback about their operation. This is important from
the perspective of helping new users understand what's going on right away.
But perhaps the biggest difference is that Max is a very powerful, open-ended
programming language in which you could solve any number of problems. Isadora
isn't that. It is a lot like Interactor in that each module is essentially a
macro that accomplishes some specific function. This approach helps people who
are just beginning to do this kind of work, as it means that useful
functionality is already embodied for you and it's very easy to start doing
things and getting interesting results quickly (like with Image/ine). Max
allows the most flexibility, but may be somewhat more difficult to program
because more things have to be built up from scratch. Isadora offers somewhat
less flexibility, but is still open-ended enough for the user to imprint his
or her aesthetic on the result.

While working at VPL, David also integrated the MMP library into Body Electric
(below) to make Bounce (also below). The MMP player plug-in is what eventually
became Macromedia Shockwave once it was plugged into the web browser (which
wasn't nearly as fun as plugging it into a full fledged real time interactive
visual programming language).

Body Electric is a real time visual programming language for VR and music and
hardware control, developed at VPL by Chuck Blanchard, which Jaron Lanier and
others used to create virtual reality simulations and virtual interactive
musical instruments.

[http://www.jaronlanier.com/vpl.html](http://www.jaronlanier.com/vpl.html)

[https://wiki.c2.com/?JaronLanier](https://wiki.c2.com/?JaronLanier)

[https://www.vrs.org.uk/virtual-reality-profiles/vpl-research.html](https://www.vrs.org.uk/virtual-reality-profiles/vpl-research.html)

[https://web.archive.org/web/20050228021115/http://www.well.c...](https://web.archive.org/web/20050228021115/http://www.well.com/user/jaron/vr.html)

[https://web.archive.org/web/20040414174418/http://www.well.c...](https://web.archive.org/web/20040414174418/http://www.well.com/user/jaron/instruments.html)

[https://web.archive.org/web/20050211182929/http://www.well.c...](https://web.archive.org/web/20050211182929/http://www.well.com/user/jaron/knittalk.html)

Body Electric supported all kinds of interesting input and output devices,
including MIDI, sending and receiving UDP packets over Ethernet, loading
Swivel3D 3D skeleton files and animating them, sending their state over the
network to a pair of SGI workstations for rendering with the Isaac rendering
engine to the VPL "EyePhones" VR headset (one SGI workstation per eye, with a
Mac to run the simulation), VR input devices like VPL's DataGlove and Body
Suit, 3D input devices like the Ascension Flock of Birds, Polhemus, and
Spaceball, 3D audio output devices like the Convolvotron, and lots of other
cool stuff.

[https://est-kl.com/manufacturer/ascension/flock-of-birds.html](https://est-kl.com/manufacturer/ascension/flock-of-birds.html)

[https://polhemus.com/](https://polhemus.com/)

[http://www-cdr.stanford.edu/DesignSpace/sponsors/Convolvotro...](http://www-cdr.stanford.edu/DesignSpace/sponsors/Convolvotron.html)

[https://www.researchgate.net/publication/253921765_The_Convo...](https://www.researchgate.net/publication/253921765_The_Convolvotron_Real-time_demonstration_of_reverberant_virtual_acoustic_environments)

Bounce is a derivative of Body Electric that David Levitt integrated with the
MMP player; I helped him develop it and used it for some fun projects.
Extremely weird and esoteric, but still one of the most productive, delightful
visual programming languages I've used!

[https://medium.com/@donhopkins/bounce-stuff-8310551a96e3](https://medium.com/@donhopkins/bounce-stuff-8310551a96e3)

[https://wiki.c2.com/?BounceLanguage](https://wiki.c2.com/?BounceLanguage)

~~~
jcelerier
I'd be interested to get your opinion on [https://ossia.io](https://ossia.io) :)

------
DonHopkins
Here's an interview with Aaron McLeran, who has done a lot of work with
CSound, and collaborated with Brian Eno on the procedural music in Will
Wright's "Spore" computer game at Maxis, using Pure Data (PD).

Immersive Audio Podcast Episode 7 Aaron McLeran

[https://podcasts.apple.com/ie/podcast/immersive-audio-podcas...](https://podcasts.apple.com/ie/podcast/immersive-audio-podcast-episode-7-aaron-mcleran/id1360242294?i=1000408230767)

[https://soundcloud.com/user-713907742/immersive-audio-podcas...](https://soundcloud.com/user-713907742/immersive-audio-podcast-episode-7-aaron-mcleran)

>In today’s episode Oliver was joined via Skype by Aaron McLeran, Lead Audio
Programmer at Epic Games. Aaron’s first taste of audio programming was writing
computer music in CSound while in graduate school at University of Notre Dame
(when he was supposed to be doing astrophysics research). Realising his true
calling, he left physics to study procedural and interactive computer music,
audio synthesis, and audio analysis with Dr. Curtis Roads at the University of
California, Santa Barbara. His first game audio experience was writing
procedural music on Spore, where he got to collaborate with Brian Eno and
Maxis’ audio director Kent Jolly on writing much of the game’s truly
procedural music. His next game audio gig was as a sound designer on Dead
Space 2, where he wrote much of the game’s interactive audio systems in Lua
along with accomplished audio director Don Veca. He made the leap from
technical sound designer to audio programmer at Sledgehammer Games, where he
worked on Call of Duty: Modern Warfare 3 and Call of Duty: Advanced Warfare.
His next audio programming gig was at ArenaNet, where he got to wrangle with
the unpredictability and scale of game audio in the context of an MMO and
developed some pretty cool tech for player-created music and musical
interaction. He’s currently working on a new multi-platform audio mixer
backend for UE4 and developing new tech and approaches to game audio for VR.

>Aaron speaks to Oliver about all things Game Audio and Procedural Audio and
his unusual entry into the industry.

GDC Vault: Procedural Music in SPORE, with Kent Jolly, Aaron McLeran

[https://www.gdcvault.com/play/323/Procedural-Music-in](https://www.gdcvault.com/play/323/Procedural-Music-in)

MAKE YOUR OWN KIND OF MUSIC IN 'SPORE' WITH HELP FROM BRIAN ENO (LISTEN TO
THIS)

[http://www.mtv.com/news/2456432/make-your-own-kind-of-music-...](http://www.mtv.com/news/2456432/make-your-own-kind-of-music-in-spore-with-help-from-brian-eno-listen-to-this/)

THE BEAT GOES ON: DYNAMIC MUSIC IN SPORE: Audio engineers Kent Jolly and Aaron
McLeran unveil Spore's procedural music generation.

[https://www.moredarkthanshark.org/eno_int_gspy-feb08.html](https://www.moredarkthanshark.org/eno_int_gspy-feb08.html)

Will Wright and Brian Eno - Generative Systems

[https://www.youtube.com/watch?v=UqzVSvqXJYg](https://www.youtube.com/watch?v=UqzVSvqXJYg)

Pure Data

[https://en.wikipedia.org/wiki/Pure_Data#Projects_using_Pure_...](https://en.wikipedia.org/wiki/Pure_Data#Projects_using_Pure_Data)

>Projects using Pure Data

>Pure Data has been used as the basis of a number of projects, as a
prototyping language and a sound engine. The table interface called the
Reactable and the abandoned iPhone app RjDj both embed Pd as a sound engine.

>Pd has been used for prototyping audio for video games by a number of audio
designers. For example, EAPd is the internal version of Pd that is used at
Electronic Arts (EA). It has also been embedded into EA Spore.

>Pd has also been used for networked performance, in the Networked Resources
for Collaborative Improvisation (NRCI) Library.

