
Live Coding in Sporth: A Stack-Based Language for Audio Synthesis [pdf] - aturley
https://iclc.livecodenetwork.org/2017/cameraReady/sporth.pdf
======
pierrec
These days I've been working on a side project based on Sporth. I've found
that the stack-based system is a really efficient way of representing audio
graphs, and if you know a bit of DSP you can get the sound you want pretty
quickly. I haven't even started using PolySporth, which seems like an
important feature/complement to the language, but I'm still having lots of fun
with plain Sporth. I wrote lots of Sporth scripts while learning the language;
here are a couple:
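To give a flavor of what a stack-based audio graph looks like, here's a toy evaluator in Python. This is purely my own sketch, not Sporth's actual engine: literals push themselves onto a stack, and each unit generator pops its inputs and pushes its output, so a patch like `440 0.5 sine` reads naturally as postfix.

```python
import math

SR = 44100  # sample rate
N = 64      # samples to render

def render(program, nsamples=N):
    """Evaluate a postfix 'patch' once per sample (toy model, not Sporth)."""
    out = []
    for n in range(nsamples):
        stack = []
        for word in program:
            if isinstance(word, (int, float)):
                stack.append(word)          # literals push themselves
            elif word == "sine":
                amp = stack.pop()           # pop amplitude, then frequency
                freq = stack.pop()
                stack.append(amp * math.sin(2 * math.pi * freq * n / SR))
        out.append(stack.pop())             # top of stack is the output sample
    return out

# "440 0.5 sine" -> a 440 Hz sine at half amplitude
samples = render([440, 0.5, "sine"])
```

Because every word just consumes and produces stack values, chaining generators and effects is simply concatenation, which is what makes patches so quick to write and rearrange.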

Drone spirit - code (this one has comments): [https://gist.github.com/pac-
dev/af6f7b7b1786bf102c88592183de...](https://gist.github.com/pac-
dev/af6f7b7b1786bf102c88592183de0f59)

audio:
[http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...](http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60.r16.cf2.rackcdn.com/spirit.mp3)

Harmo - code: [https://gist.github.com/pac-
dev/f4c4cfdb1fb03bf6ede81aac4087...](https://gist.github.com/pac-
dev/f4c4cfdb1fb03bf6ede81aac40870586)

audio:
[http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...](http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60.r16.cf2.rackcdn.com/harmo.mp3)

Monk voice - code: [https://gist.github.com/pac-
dev/76660a90406c4d8fe923fbb03094...](https://gist.github.com/pac-
dev/76660a90406c4d8fe923fbb030944df6)

audio:
[http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...](http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60.r16.cf2.rackcdn.com/monk.mp3)

~~~
zebproj
These are quite beautiful. If you are willing, I would love to add these
examples to the Sporth distribution.

To be honest, you're not missing too much with PolySporth. Actually, you're
better off pretending it doesn't exist for now ;) It was an experiment that
tried to add concepts like polyphony and note events in time. Despite the
serious time investment I put into it, I have never seriously used it. Maybe
someday I'll re-examine it and do something interesting with it.

Happy Sporthing,

-P

~~~
pierrec
Thanks! That's interesting, I've been unsure about the importance of
PolySporth. It seems important if you want to create generative compositions
in the traditional sense, by which I mean focusing more on the sequence of
notes rather than the timbre. But for me, I think I'll always return to manual
sequencing if I want to get this kind of traditional result. Whenever notes
are generated, I just want to manually re-arrange them. I guess that's one of
the reasons I still haven't learned PolySporth.

But if I want to create a piece where timbre is central (or if I want to work
on the timbral portion of a piece that has both), then programming is actually
a super effective tool, and that's where Sporth nails it for creative
exploration.

As for using my examples, yes, you can totally use and modify them with
attribution (I should probably add a proper licence :)

------
zebproj
I am the developer of Sporth. It's nice to see this project listed here. Happy
to answer any questions here or on GH.

---

Here are some Sporth links, for those interested:

The main Sporth project page:

[http://paulbatchelor.github.io/proj/sporth](http://paulbatchelor.github.io/proj/sporth)

Sporthlings: a collection of Sporth compositional etudes:

[http://paulbatchelor.github.io/sporthlings/](http://paulbatchelor.github.io/sporthlings/)

Sporthlings audio as a youtube playlist:

[https://www.youtube.com/playlist?list=PLgEE92LPHEljTLN9gFZr2...](https://www.youtube.com/playlist?list=PLgEE92LPHEljTLN9gFZr21ETOBxQueoTQ)

The Sporth Cookbook: documentation on Sporth, as well as some analysis of
Sporth patches:

[http://paulbatchelor.github.io/proj/cook/](http://paulbatchelor.github.io/proj/cook/)

~~~
garb_
Hi! I'm trying to install Spigot and it looks like the github for Runt no
longer exists? Is it available elsewhere?

------
jarmitage
Saw Paul present this last year at the International Conference on Live
Coding, really cool project.

He has some more excellent work here:
[https://github.com/PaulBatchelor](https://github.com/PaulBatchelor)

See also: [https://github.com/mollerse/ait-lang](https://github.com/mollerse/ait-lang)

------
jkrukowski
I love it, I used it in my project
[https://bitbucket.org/jkru/synthesthesia/src](https://bitbucket.org/jkru/synthesthesia/src)

~~~
zebproj
cool! any video demos of this in action?

------
adamnemecek
Paul is a beast. His work is central to AudioKit
[https://github.com/AudioKit/AudioKit](https://github.com/AudioKit/AudioKit)

------
zphds
Here's another similar library for Clojure.
[https://github.com/overtone/overtone](https://github.com/overtone/overtone)

I recommend this talk, which made music 'click' for me. Also fun if you are
trying to read GEB and generate canons.
[https://www.youtube.com/watch?v=Mfsnlbd-4xQ](https://www.youtube.com/watch?v=Mfsnlbd-4xQ)

------
mud_dauber
Yay! Somebody else has discovered Forth's principles.

~~~
unixhero
Where would one read about Forth's principles? Colour me intrigued.

~~~
macintux
I don't know the best resource, but Thinking Forth should get you started.

[http://thinking-forth.sourceforge.net](http://thinking-forth.sourceforge.net)

~~~
jdmoreira
Starting FORTH might be a better introduction. It's also from Leo Brodie.

[https://www.forth.com/starting-forth/](https://www.forth.com/starting-forth/)

------
merongivian
Not entirely related to sporth, but i created a small dsl for live coding with
ruby in the browser:
[https://negasonic.herokuapp.com/](https://negasonic.herokuapp.com/)

------
jancsika
> There is no concept of a control rate signal, a traditional feature in other
> computer-music languages like Csound, SuperCollider, MaxMSP, or PD.

Hm... what is a control rate signal in Pd?

~~~
zebproj
In PD, the normal thin cables carry control-rate signals. The thicker cables
are the audio-rate cables. Objects like "osc~" and "+~" produce audio-rate
signals. Objects like "mtof" and "stripnote" produce control-rate signals.

~~~
ssfrr
I think the issue of control rate in PD is often confusing for folks, because
there isn’t any “control rate” that is fundamental to the language. There are
audio rate signals that are processed every block, and there are messages that
get handled ASAP when they are received, and are decoupled from the audio rate
graph. These messages could be once an hour or once per millisecond. The
control rate thing comes up because most PD objects (e.g. osc~) only update
their parameters on block boundaries, which creates timing jitter (variable
delay between when a message is received and when it takes effect). You could
definitely have an audio-rate object that took into account the precise timing
of the messages it received and scheduled the parameter changes a constant
amount of time in the future, which eliminates jitter at the expense of a
fixed latency. The vline~ object is useful for generalizing this approach to
any object that can take audio-rate parameters, but there’s no reason you
couldn’t make a vosc~ object that had the functionality built in.
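The timing trade-off described above can be shown with a toy calculation (my own illustration, not Pd's actual scheduler): a message arriving mid-block either waits for the next block boundary, giving a delay that varies with arrival time (jitter), or is scheduled a fixed interval ahead, vline~-style, giving a constant delay.

```python
BLOCK = 64  # samples per block (Pd's default)

def next_block_boundary(arrival_sample):
    """Quantize to the next block boundary: variable delay (jitter)."""
    return ((arrival_sample + BLOCK - 1) // BLOCK) * BLOCK

def fixed_latency(arrival_sample, latency=BLOCK):
    """Schedule a constant interval ahead: no jitter, fixed delay."""
    return arrival_sample + latency

# Two messages arriving at different points within the same block:
for t in (1, 63):
    jittered = next_block_boundary(t) - t  # 63 samples vs 1 sample of delay
    steady = fixed_latency(t) - t          # always exactly 64 samples
```

The first scheme is what plain block-boundary parameter updates give you; the second trades a fixed latency of one block for sample-accurate timing.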

This is different from SuperCollider (and I think Csound), which have an
explicit control rate.

~~~
jancsika
> The control rate thing comes up because most PD objects (e.g. osc~) only
> update their parameters on block boundaries, which creates timing jitter
> (variable delay between when a message is received and when it takes
> effect).

Just to reiterate so that people don't get confused:

If you give _signal_ input to "osc~" -- let's say `[noise~]--[osc~]`, then
"osc~" will update its frequency parameter _every sample_. If "osc~" didn't do
that it would be a lot cheaper (because you'd only need to do one calculation
and copy it 64 times at default blocksize), but you would of course
immediately hear the loss in quality.

What you are describing is what happens when you send a message containing a
_single floating point number payload_ to the input of "osc~." In that case Pd
treats that number as if it were a vector of samples all with that same value.
That's fast and cheap.
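The distinction between the two input kinds can be sketched in Python (a toy model with names of my own invention, nothing to do with Pd's internals): a float message is promoted to a constant vector for the whole block, while a signal connection supplies a genuinely different value on every sample.

```python
import math

BLOCK = 64   # Pd's default block size
SR = 44100

def promote_float(value, blocksize=BLOCK):
    """A float message becomes a constant vector for the block: cheap."""
    return [value] * blocksize

def osc_block(freq_block, phase=0.0, sr=SR):
    """Signal-rate input: the frequency may differ on every single sample."""
    out = []
    for freq in freq_block:
        out.append(math.sin(2 * math.pi * phase))
        phase += freq / sr
    return out, phase

# Float message: one value copied across the block (one calculation, 64 copies).
steady, _ = osc_block(promote_float(440.0))

# Signal input: per-sample frequency modulation, recomputed every sample.
wobble_freqs = [440.0 + 10 * math.sin(2 * math.pi * 5 * n / SR)
                for n in range(BLOCK)]
wobbly, _ = osc_block(wobble_freqs)
```

In the first case the oscillator does the same per-sample loop, but the parameter vector was trivially cheap to produce; in the second, the upstream signal had to be computed sample by sample.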

> there’s no reason you couldn’t make a vosc~ object that had the
> functionality built in.

You can do that. But it's interesting to dissect it a bit:

* "vosc~" would only see a performance bump over `[vline~]--[osc~]` in cases where you _aren't_ sending a signal to the input. (Because if you are sending input, the code that does precision timing of control messages is wasted cycles.) So you'd probably want to make the input a control input that can't take signals.

* With "vline~", users can control the ramp by putting more objects in between "vline~" and "osc~". But with "vosc~" they'd be stuck either with the default linear interpolation, or a selection of interpolation schemes that you code into the object. In other words, potential functionality moves from userspace to compiled classspace.

Edit: remove duplicated thingy

------
andyonthewings
Is there any live coding video recording that demonstrates this? I am a noob
to live coding performance and I am curious how this works in practice.

------
markatkinson
Awesome! I just started getting into FM synthesis using the Korg Volca series,
and this seems like a really neat complement to that as a developer.

