
SuperCollider – A platform for audio synthesis and algorithmic composition - simonpure
https://supercollider.github.io/
======
nanomonkey
For Clojure fans, there is Overtone
([https://overtone.github.io/](https://overtone.github.io/)), which uses the
SuperCollider audio engine but lets you write more functional code and use
REPL-based development (within Emacs!).

~~~
AndrewUnmuted
The SC3 syntax was designed with real-time audio workflows in mind. Though the
language extensions for SuperCollider are nice, there’s real benefit to
learning the actual language as well.

~~~
nanomonkey
No knock on the original SuperCollider language, sclang. That is what I
originally learned and used.

For someone like me, who doesn't have much time to context-switch when playing
around with making music, sticking with Clojure, the language I've settled on,
is a no-brainer.

------
trocado
There's cl-collider
([https://github.com/byulparan/cl-collider](https://github.com/byulparan/cl-collider))
for those interested in controlling the SuperCollider server with Common Lisp,
which is a particularly fruitful combination (DSLs, flexibility, etc.).
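
For the curious, the server speaks plain OSC over UDP, which is why clients in so many languages exist; a minimal hand-rolled sketch in Python (assumes scsynth is running locally on its default port 57110, and handles only int and string arguments):

```python
import socket
import struct

def osc_pad(b: bytes) -> bytes:
    # OSC strings are null-terminated and padded to a 4-byte boundary.
    return b + b"\x00" * (4 - len(b) % 4)

def osc_message(address: str, *args) -> bytes:
    """Encode a minimal OSC message (int32 and string arguments only)."""
    tags = ","
    payload = b""
    for a in args:
        if isinstance(a, int):
            tags += "i"
            payload += struct.pack(">i", a)  # big-endian int32
        elif isinstance(a, str):
            tags += "s"
            payload += osc_pad(a.encode())
    return osc_pad(address.encode()) + osc_pad(tags.encode()) + payload

# Ask scsynth to instantiate the SynthDef named "default":
# /s_new defName, synthID (-1 = auto), addAction (0 = head), targetID (group 0)
msg = osc_message("/s_new", "default", -1, 0, 0)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(msg, ("127.0.0.1", 57110))
```

Every SuperCollider client, cl-collider included, is ultimately sugar over messages like this.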

------
peter_d_sherman
>"scsynth, a real-time audio server, forms the core of the platform. It
features 400+ unit generators (“UGens”) for analysis, synthesis, and
processing. Its granularity allows the fluid _combination of many known and
unknown audio techniques, moving between additive and subtractive synthesis,
FM, granular synthesis, FFT, and physical modeling_. You can write your own
UGens in C++, and users have already contributed several hundred more to the
sc3-plugins repository."

The whole project looks unbelievably beautiful and useful -- but especially
what I outlined above!

Remember that everything in audio has analogous aspects in Physics,
Electronics, Electrical Engineering, etc., etc.

In other words -- I believe this has broader applications than audio alone...

~~~
amelius
I'm guessing that more than half of the functionality already exists in SciPy
in some form or another.

~~~
n3k5
Interesting thought. SciPy is obviously useful for modelling DSP stuff, but I
have no idea how feasible it would be to use it for real-time processing,
compared to something like Pyo
([http://ajaxsoundstudio.com/pyodoc/](http://ajaxsoundstudio.com/pyodoc/)).

I just tried searching "Jupyter audio live coding" for fun and it unearthed
quite a few interesting results, but the real-time ones tend to involve
SuperCollider or something similar. E.g. I discovered NSynth [0], which seemed
like magic for a minute (they use TensorFlow to make a synthesizer; there's
even an instrument for Ableton Live!) until I found out how they ‘cheat’
(pre-computing a wave-table for multisampling).
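
The playback side of that ‘cheat’ is simple enough to sketch in NumPy (everything below is illustrative, not NSynth's actual code; real wavetable synths also interpolate between table entries and between multiple tables):

```python
import numpy as np

SR = 44_100
TABLE_SIZE = 2048

# Pre-compute one cycle of a (made-up) complex waveform into a table.
phase = np.linspace(0.0, 2 * np.pi, TABLE_SIZE, endpoint=False)
table = np.sin(phase) + 0.3 * np.sin(3 * phase) + 0.1 * np.sin(7 * phase)

def play(freq_hz: float, seconds: float) -> np.ndarray:
    """Render a note by scanning the precomputed table at the right rate."""
    n = int(SR * seconds)
    # Fractional table position per output sample, wrapped around the table.
    idx = (np.arange(n) * freq_hz * TABLE_SIZE / SR) % TABLE_SIZE
    return table[idx.astype(int)]  # nearest-neighbour lookup for brevity

note = play(440.0, 0.5)  # half a second of A4
```

All the expensive work happens once when the table is built; per-sample playback is just an index computation and a lookup.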

[0] [https://magenta.tensorflow.org/nsynth-instrument](https://magenta.tensorflow.org/nsynth-instrument)

~~~
amelius
I suppose that many operations in DSP are just linear transforms. So you could
pre-compute transforms using SciPy (and its wealth of available functions).
And then you could have a pipelined version of the BLAS matrix-vector
multiplication to make it low-latency. Possibly that could run on the GPU.
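
A rough sketch of that idea, with a hypothetical precomputed 64x64 operator applied block by block via NumPy's matrix-vector product (BLAS underneath), mimicking how a streaming pipeline would consume audio:

```python
import numpy as np

rng = np.random.default_rng(42)

# Stand-in for some linear DSP operator precomputed offline with SciPy
# (e.g. a filter matrix or windowed transform); here just random numbers.
T = rng.standard_normal((64, 64)) / 8.0

signal = rng.standard_normal(64 * 100)

# Stream the signal through the operator one 64-sample block at a time,
# as a low-latency pipeline would.
out_blocks = [T @ signal[i:i + 64] for i in range(0, len(signal), 64)]
streamed = np.concatenate(out_blocks)

# Identical to transforming all blocks at once in a single batched multiply.
batch = (T @ signal.reshape(-1, 64).T).T.reshape(-1)
assert np.allclose(streamed, batch)
```

The per-block multiply is cheap and constant-time, which is what matters for latency; the same structure maps directly onto a GPU matmul.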

~~~
n3k5
Yup, making it fast is no problem; I was just concerned about achieving low
latency without dropping samples. The typical cookbook examples usually output
to matplotlib or maybe an audio file, rather than straight to DAC, and I had
amplified such search results by specifically looking for Jupyter examples
(and also missed a lot of work that was done when it was still called
IPython).

Upon digging a bit deeper it turns out using vanilla SciPy for real-time DSP
is totally a thing.

Live-coding adds some additional demands regarding a different kind of latency
— pre-computations need to happen as quickly as possible — but it seems
feasible.

~~~
amelius
Interesting! Could you share some of the links that you came across in your
exploration?

~~~
n3k5
Basically I just skimmed through
[https://www.google.com/search?q=scipy+dsp+real-time](https://www.google.com/search?q=scipy+dsp+real-time)
and weeded out false positives, i.e. results that do use SciPy in some
capacity but lean on more specialised stuff such as the aforementioned Pyo.
This mostly got rid of the music-related results (à la ‘what if guitar effect,
but Python instead of sclang’) and left me with e.g. Stack Overflow posts that
simply confirm ‘yes, it's feasible’.

But I have two noteworthy links:

[https://warrenweckesser.github.io/papers/weckesser-scipy-linear-filters.pdf](https://warrenweckesser.github.io/papers/weckesser-scipy-linear-filters.pdf)

Particularly “Filtering a long signal in batches” on page 6, which shows how
to apply the Butterworth filter from the previous section to individual
windows while preserving its state across invocations. As I'm familiar with
NumPy, but very ignorant about scipy.signal, this was the ‘Bingo!’ moment for
me :)
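
Assuming that refers to scipy.signal.lfilter's `zi` state argument, the batch-filtering pattern looks roughly like this (filter order and block size picked arbitrarily for the sketch):

```python
import numpy as np
from scipy.signal import butter, lfilter, lfilter_zi

# Design a 4th-order low-pass Butterworth filter (cutoff at 0.1 x Nyquist).
b, a = butter(4, 0.1)

x = np.random.default_rng(0).standard_normal(4096)

# Initial filter state, scaled by the first sample to avoid a start-up transient.
zi = lfilter_zi(b, a) * x[0]

# Filter in blocks of 512 samples, carrying the state zi across calls --
# exactly what a real-time callback would do with each incoming buffer.
blocks = []
for start in range(0, len(x), 512):
    y_block, zi = lfilter(b, a, x[start:start + 512], zi=zi)
    blocks.append(y_block)
y_batched = np.concatenate(blocks)

# The batched result matches filtering the whole signal in one shot.
y_full, _ = lfilter(b, a, x, zi=lfilter_zi(b, a) * x[0])
assert np.allclose(y_batched, y_full)
```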

[https://scikit-dsp-comm.readthedocs.io/en/latest/](https://scikit-dsp-comm.readthedocs.io/en/latest/)

> _This allows in particular demodulation of radio signals and downsampling to
> baseband analog signals for streaming playback of say an FM broadcast
> station._

I didn't dig into where it does the heavy lifting for _that_ (sample rates in
the MHz range) — there may be some C/C++ involved. But the docs show some nice
examples of how to do streaming audio DSP with NumPy, SciPy and PyAudio inside
Jupyter:

[https://scikit-dsp-comm.readthedocs.io/en/latest/nb_examples/Real-Time-DSP_Using_pyaudio_helper_and_ipywidgets.html](https://scikit-dsp-comm.readthedocs.io/en/latest/nb_examples/Real-Time-DSP_Using_pyaudio_helper_and_ipywidgets.html)

