Live Coding in Sporth: A Stack-Based Language for Audio Synthesis [pdf] (livecodenetwork.org)
83 points by aturley 9 months ago | 37 comments



These days I've been working on a side project based on Sporth. I've found that the stack-based system is a really efficient way of representing audio graphs, and if you know a bit of DSP you can get the sound you want pretty quickly. I haven't even started using PolySporth, which seems like an important feature/complement to the language, but I'm still having lots of fun with plain Sporth. I wrote lots of Sporth scripts while learning the language; here are a couple:

Drone spirit - code (this one has comments): https://gist.github.com/pac-dev/af6f7b7b1786bf102c88592183de...

audio: http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...

Harmo - code: https://gist.github.com/pac-dev/f4c4cfdb1fb03bf6ede81aac4087...

audio: http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...

Monk voice - code: https://gist.github.com/pac-dev/76660a90406c4d8fe923fbb03094...

audio: http://007ee821dfb24ea1133d-f5304285da51469c5fdbbb05c1bdfa60...
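To give a flavor of why a stack maps so nicely onto an audio graph, here's a toy sketch in Python (hypothetical, not real Sporth syntax or semantics): postfix tokens push constant signals onto a stack, and ugens pop their arguments off it.

```python
import math

# Toy sketch of a stack-based audio language (hypothetical -- not real
# Sporth syntax or semantics). Postfix tokens push per-sample generator
# functions onto a stack; ugens pop their arguments off it.

SR = 44100  # sample rate

def evaluate(tokens):
    """Evaluate postfix tokens into a single generator f(n) -> sample."""
    stack = []
    for tok in tokens:
        if tok == "sine":                 # sine(freq, amp), Sporth-style
            amp, freq = stack.pop(), stack.pop()
            stack.append(lambda n, f=freq, a=amp:
                         a(n) * math.sin(2 * math.pi * f(n) * n / SR))
        elif tok == "+":
            b, a = stack.pop(), stack.pop()
            stack.append(lambda n, a=a, b=b: a(n) + b(n))
        else:                             # a number becomes a constant signal
            stack.append(lambda n, v=float(tok): v)
    return stack.pop()

# "440 0.5 sine" -> a 440 Hz sine at half amplitude
sig = evaluate("440 0.5 sine".split())
samples = [sig(n) for n in range(64)]
```

The point is that each word consumes its inputs from the stack and leaves one output, so the textual order of the words *is* the audio graph.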


These are quite beautiful. If you are willing, I would love to add these examples to the Sporth distribution.

To be honest, you're not missing too much with PolySporth. Actually, you're better off pretending it doesn't exist for now ;) It was an experiment that tried to add concepts like polyphony and note events in time. Despite the serious time investment I put into it, I have never seriously used it. Maybe someday I'll re-examine it and do something interesting with it.

Happy Sporthing,

-P


Thanks! That's interesting, I've been unsure about the importance of PolySporth. It seems important if you want to create generative compositions in the traditional sense, by which I mean focusing more on the sequence of notes rather than the timbre. But for me, I think I'll always return to manual sequencing if I want to get this kind of traditional result. Whenever notes are generated, I just want to manually re-arrange them. I guess that's one of the reasons I still haven't learned PolySporth.

But if I want to create a piece where timbre is central (or if I want to work on the timbral portion of a piece that has both), then programming is actually a super effective tool, and that's where Sporth nails it for creative exploration.

As for using my examples, yes, you can totally use and modify them with attribution (I should probably add a proper license :)


I am the developer of Sporth. It's nice to see this project listed here. Happy to answer any questions here or on GH.

---

Here are some Sporth links, for those interested:

The main Sporth project page:

http://paulbatchelor.github.io/proj/sporth

Sporthlings: a collection of Sporth compositional etudes:

http://paulbatchelor.github.io/sporthlings/

Sporthlings audio as a youtube playlist:

https://www.youtube.com/playlist?list=PLgEE92LPHEljTLN9gFZr2...

The Sporth Cookbook: documentation on Sporth, as well as some analysis of Sporth patches:

http://paulbatchelor.github.io/proj/cook/


Hi! I'm trying to install Spigot, and it looks like the GitHub repo for Runt no longer exists? Is it available elsewhere?


Saw Paul present this last year at the International Conference on Live Coding, really cool project.

He has some more excellent work here: https://github.com/PaulBatchelor

See also: https://github.com/mollerse/ait-lang


I love it, I used it in my project https://bitbucket.org/jkru/synthesthesia/src


Cool! Any video demos of this in action?


Paul is a beast. His work is central to AudioKit https://github.com/AudioKit/AudioKit


Here's another similar library for Clojure. https://github.com/overtone/overtone

I recommend this talk, which made music 'click' for me. Also fun if you are trying to read GEB and generate canons. https://www.youtube.com/watch?v=Mfsnlbd-4xQ


Yay! Somebody else has discovered Forth's principles.


Where would one read about Forth's principles? Colour me intrigued.


I recommend playing with a PostScript REPL, either Ghostscript or a couple of the online ones.

http://logand.com/sw/wps/index.html#sec-1


If you go the PostScript route then check out the PostScript Language Tutorial and Cookbook (blue book). It is available in a lot of places on the net and quite a nice tutorial.


Great recommendation.

https://www-cdf.fnal.gov/offline/PostScript/BLUEBOOK.PDF

fnal.gov is the largest, most complete trove of postscript language material on the net.


I don't know the best resource, but Thinking Forth should get you started.

http://thinking-forth.sourceforge.net


Starting FORTH might be a better introduction. It's also from Leo Brodie.

https://www.forth.com/starting-forth/


You beat me to it. Thinking Forth is indeed the place to start.


Since the others gave you promotional material, I'll give you a write-up that discusses the good and weird sides of it:

https://news.ycombinator.com/item?id=1680149

The weird stuff makes a lot of people decide not to use it, so they should know ahead of time. I do think there's potential for a Forth with less weird stuff, or one more compatible with C. There could even be one already, given that a benefit of Forth is that it's easy to write interpreters for it. On that note, here's a tutorial series illustrating that:

http://blog.asrpo.com/forth_tutorial_part_1
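As a rough illustration of that point (a toy sketch of my own, not taken from the tutorial above): a Forth-style interpreter needs little more than a data stack, a dictionary of words, and colon definitions.

```python
# A minimal Forth-style interpreter sketch in Python, illustrating why
# Forth is easy to implement: a data stack, a dictionary of words, and
# colon definitions for new words. (Toy example, not a full Forth.)

def forth(source, stack=None):
    stack = [] if stack is None else stack
    words = {
        "+":    lambda s: s.append(s.pop() + s.pop()),
        "*":    lambda s: s.append(s.pop() * s.pop()),
        "dup":  lambda s: s.append(s[-1]),
        "swap": lambda s: s.extend([s.pop(), s.pop()]),
        "drop": lambda s: s.pop(),
    }
    tokens = iter(source.split())
    for tok in tokens:
        if tok == ":":                   # colon definition: : name body ;
            name = next(tokens)
            body = []
            for t in tokens:
                if t == ";":
                    break
                body.append(t)
            words[name] = lambda s, b=" ".join(body): forth(b, s)
        elif tok in words:
            words[tok](stack)
        else:
            stack.append(int(tok))
    return stack

# ": square dup * ; 7 square" leaves 49 on the stack
print(forth(": square dup * ; 7 square"))   # -> [49]
```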


I played around with Forth last year and found it very fun. Similar to Lisp, it has a simple ruleset with which you can build up abstractions that almost look like a DSL. I'd say it sits somewhere between assembly and C: it's like writing assembly, but you don't have to worry about register assignment.


Was the pun on colorForth intended? Just in case it wasn't: colorForth was also created by the creator of Forth, and color has syntactic significance in it.


Not entirely related to Sporth, but I created a small DSL for live coding with Ruby in the browser: https://negasonic.herokuapp.com/


> There is no concept of a control rate signal, a traditional feature in other computer-music languages like Csound, SuperCollider, MaxMSP, or PD.

Hm... what is a control rate signal in Pd?


"As with most DSP software, there are two primary rates at which data is passed: sample (audio) rate, usually at 44,100 samples per second, and control rate, at 1 block per 64 samples" (https://en.wikipedia.org/wiki/Pure_Data)

This is actually kind of annoying with Pd as it's not always clear what rate an object is running at.
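Working out what those defaults imply (just arithmetic on the figures quoted above):

```python
# Pd's default rates, from the figures quoted above:
sample_rate = 44100      # audio samples per second
block_size = 64          # samples per DSP block

control_rate = sample_rate / block_size            # control updates per second
block_period_ms = 1000 * block_size / sample_rate  # duration of one block

print(f"control rate: {control_rate:.2f} Hz")      # 689.06 Hz
print(f"block period: {block_period_ms:.3f} ms")   # 1.451 ms
```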


Can you give an example?

I maintain a fork of Pd, and I'm still not clear what the Sporth author, that Wikipedia article, or you are talking about.


OK, an example is trying to use threshold~ to sculpt a wave. You can't, because threshold~ is supposed to be a "control" object and runs at control rate.

https://forum.pdpatchrepo.info/topic/7906/control-as-signal-...


Ah, I see what you're talking about now.

There are indeed a handful of classes-- some in d_ctl.c-- (and probably externals, too) which take signal input and output some message like "bang" on block boundaries (by using a clock callback). Still, there's no "control rate"-- "threshold~" doesn't output data every block. It outputs data only when the threshold value is exceeded, quantized to block boundaries.

At least in my fork (Purr Data), the control outlets are visually distinct from signal outlets. So upon instantiating [threshold~] you can immediately see that it doesn't output signal data.


I've never used Pd or any of the others so I'm guessing here. Could it be the equivalent to an 'lfo input'. The rate at which it controls parameters?


That could be the implication, but it's not the way Pd works.

For example, when you make a signal connection (thick wires) in Pd you immediately get a signal "flowing" through that part of the diagram. It is similar to making a connection between two modules on a Buchla synthesizer. That is very useful for prototyping because you immediately get aural feedback as you connect things together. (It's also efficient because you are doing computation on vectors of data.)

That doesn't happen with control objects in Pd. When you make a control connection (thin wires) no function gets called until you somehow trigger an event that causes the object to output some data. That is very useful for prototyping because you can make arbitrary turing-complete scripts full of branches of heterogeneous data which only get computed sporadically.

I guess someone could write a library cloning the core DSP objects and have them do "control rate" computations. That is, have them do a single computation per block, and copy that value for the remaining samples of the block. I imagine no one has done that for the same reason Pd only has a single numeric (float) type-- it's good enough as is.


In PD, the normal thin cables carry control-rate signals. The thicker cables are the audio-rate cables. Objects like "osc~" and "+~" produce audio-rate signals. Objects like "mtof" and "stripnote" produce control-rate signals.


I think the issue of control rate in PD is often confusing for folks, because there isn’t any “control rate” that is fundamental to the language. There are audio rate signals that are processed every block, and there are messages that get handled ASAP when they are received, and are decoupled from the audio rate graph. These messages could be once an hour or once per millisecond. The control rate thing comes up because most PD objects (e.g. osc~) only update their parameters on block boundaries, which creates timing jitter (variable delay between when a message is received and when it takes effect). You could definitely have an audio-rate object that took into account the precise timing of the messages it received and scheduled the parameter changes a constant amount of time in the future, which eliminates jitter at the expense of a fixed latency. The vline~ object is useful for generalizing this approach to any object that can take audio-rate parameters, but there’s no reason you couldn’t make a vosc~ object that had the functionality built in.
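A sketch of the jitter being described (my own Python illustration, assuming Pd's default 44.1 kHz sample rate and 64-sample block): a message arriving at an arbitrary time only takes effect at the next block boundary, so the effective delay varies between zero and one block.

```python
import math

SR = 44100
BLOCK = 64
BLOCK_MS = 1000 * BLOCK / SR   # one block is ~1.451 ms

def block_quantized_delay_ms(msg_time_ms):
    """Delay until the next block boundary -- this delay is the jitter."""
    next_boundary = math.ceil(msg_time_ms / BLOCK_MS) * BLOCK_MS
    return next_boundary - msg_time_ms

# Messages at different times within a block see different delays:
for t in (0.1, 0.9, 1.4):
    print(f"msg at {t} ms takes effect {block_quantized_delay_ms(t):.3f} ms later")
```

A vline~-style approach instead schedules every change a fixed amount of time ahead: constant latency, zero jitter.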

This is different from SuperCollider (and, I think, Csound), which have an explicit control rate.


> The control rate thing comes up because most PD objects (e.g. osc~) only update their parameters on block boundaries, which creates timing jitter (variable delay between when a message is received and when it takes effect).

Just to reiterate so that people don't get confused:

If you give signal input to "osc~"-- let's say `[noise~]--[osc~]`, then "osc~" will update its frequency parameter every sample. If "osc~" didn't do that it would be a lot cheaper (because you'd only need to do one calculation and copy it 64 times at default blocksize), but you would of course immediately hear the loss in quality.

What you are describing is what happens when you send a message containing a single floating point number payload to the input of "osc~." In that case Pd treats that number as if it were a vector of samples all with that same value. That's fast and cheap.
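To restate those two input paths as a sketch (hypothetical Python, just to make the cost difference concrete):

```python
# Two ways an object like "osc~" can receive its frequency input:
BLOCK = 64  # default block size

def promote_float(value, block_size=BLOCK):
    """Control path: one float is treated as a constant vector (cheap)."""
    return [value] * block_size

def signal_block(freq_signal, start, block_size=BLOCK):
    """Signal path: every input sample is used individually (costlier)."""
    return [freq_signal(start + n) for n in range(block_size)]

ctl = promote_float(440.0)                  # 64 copies of 440.0
sig = signal_block(lambda n: 440.0 + n, 0)  # a different value every sample
```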

> there’s no reason you couldn’t make a vosc~ object that had the functionality built in.

You can do that. But it's interesting to dissect it a bit:

* "vosc~" would only see a performance bump over `[vline~]--[osc~]` in cases where you aren't sending a signal to the input. (Because if you are sending input, the code that does precision timing of control messages is wasted cycles.) So you'd probably want to make the input a control input that can't take signals.

* With "vline~", users can control the ramp by putting more objects in between "vline~" and "osc~". But with "vosc~" they'd be stuck either with the default linear interpolation, or a selection of interpolation schemes that you code into the object. In other words, potential functionality moves from userspace to compiled classspace.

Edit: remove duplicated thingy


> Objects like "mtof" and "stripnote" produce control-rate signals.

A theoretical "control-rate" "mtof" object would take its input value and compute a frequency value once every block when DSP is turned on.

But that's not what "mtof" does. Instead, it computes the frequency for an inputted MIDI value at the time it receives that value. That time could be once every block, once a minute, a single time when I load the program, at random intervals on Tuesday, or even never.

The thin line boxes constitute a kind of visual procedural scripting language. Kind of like shell scripting if pipes could have multiple prongs fanning out into multiple destinations.


> A theoretical "control-rate" "mtof" object would take its input value and compute a frequency value once every block when DSP is turned on.

In Sporth, the "mtof" unit-generator does a MIDI to frequency conversion for every audio sample, thus making it audio rate.
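For reference, the conversion itself is the standard equal-temperament formula (A4 = MIDI note 69 = 440 Hz); a quick Python sketch:

```python
def mtof(midi_note):
    """MIDI note number to frequency, equal temperament, A4 = 440 Hz."""
    return 440.0 * 2.0 ** ((midi_note - 69) / 12.0)

print(mtof(69))   # 440.0
print(mtof(60))   # ~261.63, middle C
```

Running this once per sample, as Sporth does, makes it an audio-rate signal; running it per block, or only on events, is the distinction being discussed here.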

Perhaps it is better to think of control-rate signals as input signals rather than output signals. The "osc~" object, for instance, will update the frequency at every audio block. This would be the control rate. In Sporth, oscillator frequency values are updated every sample inside the audio block.

> But that's not what "mtof" does. Instead, it computes the frequency for an inputted MIDI value at the time it receives that value. That time could be once every block, once a minute, a single time when I load the program, at random intervals on Tuesday, or even never.

The important distinction here is that the resolution can't be smaller than the audio-block size, which in turn defines the control-rate.


> The "osc~" object, for instance, will update the frequency at every audio block.

You're just talking about sending a float from a control object to "osc~", right?

To be clear:

If "osc~" is receiving a float atom from a control object (thin line), then yes, it will implicitly convert that float to an input vector on block boundaries.

If "osc~" is receiving signal data from a DSP object (thick line), every sample of the input is used to calculate every sample of the output.

If "osc~" is receiving signal data from "vline~", the user can supply subsample-accurate frequency changes to "osc~", bounded only by float precision, at the expense of performance.


Is there any live coding video recording that demonstrates this? I am a noob to live coding performance and I am curious how this works in practice.


Awesome! I just started getting into FM synthesis using the Korg Volca series, and this seems like a really neat complement to that as a developer.



