SuperCollider – A platform for audio synthesis and algorithmic composition (supercollider.github.io)
67 points by simonpure on May 23, 2020 | 14 comments



For Clojure fans, there is [Overtone](https://overtone.github.io/), which uses the SuperCollider audio engine but lets you write more functional code and use REPL-based development (within Emacs!).


I've come across Overtone a few times and pretty much ignored it because I'm not a Clojure fan. But the Shadertone integration looks very appealing: https://youtu.be/kyL3xc7MzR0 I guess I should give this a go after all; thanks for the prod ;)

Tidal is another interesting project in that vein (though without the visual part): https://tidalcycles.org

The range of practical tasks for which this kind of tool would be my first choice is very narrow, but even if you have no clue what you'd do with it, it can be worth checking out for an afternoon just to experience a neat little declarative language that happens to go "beep boop [beep boop]" (yes, that is a snippet of valid code) as a side effect.

The documentation makes installation/setup quick and easy enough that it's worthwhile even if you have no other use for SuperCollider and just do it for the language tourism, using the included default sound bank. For example you could try code-golfing a rumba clave.


David Nolen coincidentally just tweeted re: SuperCollider.

https://twitter.com/swannodette/status/1263872131558977536


The SC3 syntax was designed with real-time audio workflows in mind. Though the language extensions for SuperCollider are nice, there’s real benefit to learning the actual language as well.


No knock on the original SuperCollider language, sclang. That is what I originally learned and used.

For someone like me, who doesn't have much time to context-switch when playing around with making music, sticking with Clojure, which I've settled on as my language of choice, is a no-brainer.


Came here to post this.

I enjoyed playing with Overtone very much a few years ago. Haven't been at it in a while.

Are there any new learning resources or pieces of the ecosystem I should know about?


There's cl-collider (https://github.com/byulparan/cl-collider) for those interested in controlling the SuperCollider server with Common Lisp, which is a particularly fruitful combination (DSLs, flexibility, etc).


>"scsynth, a real-time audio server, forms the core of the platform. It features 400+ unit generators (“UGens”) for analysis, synthesis, and processing. Its granularity allows the fluid combination of many known and unknown audio techniques, moving between additive and subtractive synthesis, FM, granular synthesis, FFT, and physical modeling. You can write your own UGens in C++, and users have already contributed several hundred more to the sc3-plugins repository."

The whole project looks unbelievably beautiful and useful -- but especially what I quoted above!

Remember that everything in audio has analogous aspects in Physics, Electronics, Electrical Engineering, etc., etc.

In other words -- I believe this has broader applications than audio alone...


I'm guessing that more than half of the functionality already exists in SciPy in some form or another.


Interesting thought. SciPy is obviously useful for modelling DSP stuff, but I have no idea how feasible it would be to use it for real-time processing, compared to something like Pyo (http://ajaxsoundstudio.com/pyodoc/).

I just tried searching "Jupyter audio live coding" for fun and it unearthed quite a few interesting results, but the real-time ones tend to involve SuperCollider or something similar. E.g. I discovered NSynth [0], which seemed like magic for a minute (they use Tensorflow to make a synthesizer, there's even an instrument for Ableton Live!) until I found out how they ‘cheat’ (pre-computing a wave-table for multisampling).

[0] https://magenta.tensorflow.org/nsynth-instrument


I suppose that many operations in DSP are just linear transforms. So you could pre-compute transforms using SciPy (and its wealth of available functions). And then you could have a pipelined version of the BLAS matrix-vector multiplication to make it low-latency. Possibly that could run on the GPU.
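To make the pre-computed-transform idea concrete, here's a minimal NumPy sketch (the 4-tap FIR kernel, signal length, and variable names are my own illustration, not from any of the linked projects): a causal FIR filter written as a Toeplitz matrix built offline, then applied at runtime as a single matrix-vector product.

```python
import numpy as np

# Hypothetical 4-tap moving-average FIR kernel
h = np.array([0.25, 0.25, 0.25, 0.25])
N = 8  # block length for this toy example

# Pre-compute the (lower-triangular Toeplitz) convolution matrix offline
T = np.zeros((N, N))
for i in range(N):
    for j in range(N):
        if 0 <= i - j < len(h):
            T[i, j] = h[i - j]

x = np.arange(N, dtype=float)

# At runtime, applying the filter is one matrix-vector product,
# which BLAS (or a GPU) can do with predictable latency
y = T @ x

# Same result as causal convolution truncated to N samples
assert np.allclose(y, np.convolve(x, h)[:N])
```

For long-running streams you'd still need to carry state across block boundaries, but the per-block work really is just a GEMV.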


Yup, making it fast is no problem; I was just concerned about achieving low latency without dropping samples. The typical cookbook examples usually output to matplotlib or maybe an audio file, rather than straight to DAC, and I had amplified such search results by specifically looking for Jupyter examples (and also missed a lot of work that was done when it was still called IPython).

Upon digging a bit deeper it turns out using vanilla SciPy for real-time DSP is totally a thing.

Live-coding adds some additional demands regarding a different kind of latency — pre-computations need to happen as quickly as possible — but it seems feasible.


Interesting! Could you share some of the links that you came across in your exploration?


Basically I just skimmed through https://www.google.com/search?q=scipy+dsp+real-time and weeded out false positives — results that use SciPy only in some auxiliary capacity — as well as more specialised stuff such as the aforementioned Pyo. This mostly got rid of the music-related stuff (à la ‘what if guitar effect but Python instead of sclang’) and left me with e.g. Stack Overflow posts that simply confirm ‘yes, it's feasible’.

But I have two noteworthy links:

https://warrenweckesser.github.io/papers/weckesser-scipy-lin...

Particularly “Filtering a long signal in batches” on page 6, which shows how to apply the Butterworth filter from the previous section to individual windows while preserving its state across invocations. As I'm familiar with NumPy, but very ignorant about scipy.signal, this was the ‘Bingo!’ moment for me :)
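For anyone who wants to try that batching pattern without reading the paper first, here's a minimal sketch (the filter order, cutoff, block size, and test signal are my own choices): scipy.signal.lfilter accepts a zi state vector and returns the updated state, so filtering block-by-block gives bit-for-bit the same output as filtering the whole signal at once.

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
x = rng.standard_normal(4096)

b, a = signal.butter(4, 0.2)        # 4th-order low-pass, cutoff 0.2*Nyquist
full = signal.lfilter(b, a, x)      # reference: filter the signal in one go

# Filter in 512-sample batches, carrying the filter state (zi) across calls
zi = np.zeros(max(len(a), len(b)) - 1)  # zero initial conditions
blocks = []
for start in range(0, len(x), 512):
    y, zi = signal.lfilter(b, a, x[start:start + 512], zi=zi)
    blocks.append(y)
batched = np.concatenate(blocks)

print(np.allclose(full, batched))   # True
```

The block size only affects latency and scheduling, not the output, which is what makes this usable for streaming.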

https://scikit-dsp-comm.readthedocs.io/en/latest/

> This allows in particular demodulation of radio signals and downsampling to baseband analog signals for streaming playback of say an FM broadcast station.

I didn't dig into where it does the heavy lifting for that (sample rates in the MHz range) — there may be some C/C++ involved. But the docs show some nice examples of how to do streaming audio DSP with NumPy, SciPy and PyAudio inside Jupyter:

https://scikit-dsp-comm.readthedocs.io/en/latest/nb_examples...
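The general shape of such a PyAudio-based pipeline, as I understand it, is to keep the DSP in a pure function and wire it into PyAudio's callback API (the process_block stage below is a hypothetical soft-clipper of my own, not from those docs; the callback signature and paContinue constant are PyAudio's real interface):

```python
import numpy as np

def process_block(samples: np.ndarray, gain: float = 2.0) -> np.ndarray:
    """Hypothetical per-block DSP stage: gain followed by tanh soft-clipping.

    Pure function of the input block, so it can be unit-tested offline
    and dropped into a real-time callback unchanged.
    """
    return np.tanh(gain * samples)

# Sketch of the PyAudio wiring (not executed here; requires an audio device):
#
#   import pyaudio
#
#   def callback(in_data, frame_count, time_info, status):
#       x = np.frombuffer(in_data, dtype=np.float32)
#       y = process_block(x).astype(np.float32)
#       return (y.tobytes(), pyaudio.paContinue)
#
#   pa = pyaudio.PyAudio()
#   stream = pa.open(format=pyaudio.paFloat32, channels=1, rate=44100,
#                    input=True, output=True, stream_callback=callback)

# Offline sanity check of the DSP stage
x = np.linspace(-1.0, 1.0, 256, dtype=np.float32)
y = process_block(x)
assert y.shape == x.shape
assert np.all(np.abs(y) <= 1.0)  # tanh bounds the output
```

Keeping the callback this thin matters: anything slow (allocation, Python-level loops) inside it risks exactly the dropped samples discussed above.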



