Musical Notation for Modular Synthesizers (perfectcircuit.com)
154 points by bschne 9 months ago | 69 comments



Since my master's studies, I have been researching this topic. I highly recommend Professor Thor Magnusson's book, "Sonic Writing" (https://www.bloomsbury.com/uk/sonic-writing-9781501313868/), as well as all of his research.

For example, he discusses the idea of "Algorithms as Scores" in this article (https://cris.brighton.ac.uk/ws/portalfiles/portal/268697/Mag...).

These concepts have profoundly influenced my creation of Glicol (https://glicol.org/).


I'm a big fan of Glicol; my main issue with it is the lack of documentation. There are some great getting-started guides, but beyond that not too much (for example, the glicol-cli example uses the built-in BD drum, but I can't find a reference listing all the available drums).

As I say, it's otherwise perfect for me; its sound design capabilities are really fantastic.


How does Glicol differ from Supercollider (https://github.com/supercollider/supercollider)?


There are indeed some similarities. I was influenced by SuperCollider (SC). One of my master's graduation projects was a system based on SC: https://github.com/chaosprint/Packing. Both SC and Glicol are written in low-level languages—SC in C++ and Glicol in Rust. Inspired by SC's reusable scsynth, I created glicol_synth as an independent audio library.

However, their syntax differs greatly. Glicol's syntax is designed for live coding, prioritizing simplicity and readability, and it's actually partly inspired by modular synths (for example: https://glicol.org/demo#minitechno and https://glicol.org/tour#fm), while SC's syntax inherits from Smalltalk, adhering to standard OOP. SC's ecosystem is mature, offering GUI development, various sound processing methods, and robust multi-channel support. In contrast, Glicol's sound processing is still very minimal and experimental. Once the basic architecture (multi-channel and audio-graph handling) is established, adding sound processing modules will be straightforward.

Additionally, Glicol's support for both browser and CLI is a highlight.


Roland's Practical Synthesis for Electronic Music, Volume 2 starts with a discussion of modular notation. To me, it's very well thought out.

https://reaktorplayer.wordpress.com/wp-content/uploads/2018/...


This is likely to be personal incompatibility with Roland’s style, but I’ve always found Roland user interface designs to be inscrutable banks of deep menus, arbitrary buttons, tiny screens, and “wizard” style branching between wholly disparate aesthetic presets that change everything at once. I have never been able to intuit and memorize Roland gear well enough to customize according to my own mental models. The idea that anything Roland would be held aloft as “well thought out” by somebody is baffling to me.

Roland’s popularity and success speak for themselves, so there’s no need to prove anything. Their UIs are just completely at odds with my personal creative flow.

Back in the late 90s I mixed an entire album on a VS-880 (with good results, it sold a couple thousand copies and got decent college radio airplay) so I’ve definitely given them a chance and gotten the full experience — but that damn thing fought me relentlessly. None of the motions I learned stuck with me.

I feel the same way about a lot of hardware electronic gear (I was utterly defeated by an Akai sampler around the same time). The exceptions are the interfaces that hew to a consistent theoretical model and emphasize tweaking of individual parameters rather than selection of presets. I was highly productive within the Logic Audio environment; I had good impressions of the Nord Modular and its GUI software.

Any Roland-style modeling of modular synthesis is likely to leave me behind. Glad it works for you and many others, though!


The new Roland technical strategy does not speak to me either.

The book is from Roland’s 100M modular system and was written in the late 1970s or early ’80s. That version of the System 100 was menuless, entirely analog, and patched with some basic normals and wires. I came across the book because my Eurorack is mostly Behringer System 100 modules, because they are cheap, as am I.

===

I picked up a VS880 for about $100 last year and use it as my primary multitrack recorder because I prefer not to use general purpose computers for creative work anymore and it’s a hobby. I have a ZuluSCSI plugged into the back and record to an SD card on that.

This year, I made a commitment to learn how to use it…not much point in selling it…and do more recording. What I like about it is not the menus. It’s that being the first in the series, the design brief seems to have been a better tape machine and alternative to ADAT. Later models were competing with DAWs.

Because the VS880 is fanless, I can leave it on all day while I do other things (e.g. take a nap). Of course it works for me because my musical ambitions are very unambitious.


System 100 is a _great_ design to learn on. The nomenclature and patching strategy are pretty close to what we do today (vs. the Moog 55 series, which is very... Moog-ey), and it includes quite a few conveniences, like inverted outs on envelopes, without getting in the way.

Aside from using a similar visual design language, it's hard to believe it's the same company that makes Roland's contemporary menu divers.

And the Behringer modules are dirt cheap. I think a few of the utilities are like $50 new.


In fairness to Roland, you can buy a ten hp Eurorack module with deep menus…and pay a premium price because, you know, it’s Eurorack.

Anyway, most Behringer modules can be had used for $50 or less with some patience. They are not investments in anything other than happiness.


You're talking about post-D50 Roland. Before the D50, Roland made simple analog synths with very intuitive interfaces.

After the D50 - and especially after the DX7 before it, which started the one-slider-panel 80s trend - synth interfaces started to sprout two-line LCD panels.

Most Roland UIs today are pretty horrific.


> I’ve always found Roland user interface designs to be inscrutable banks of deep menus, arbitrary buttons, tiny screens, and “wizard” style branching between wholly disparate aesthetic presets

This also describes the Akai MPC world.

I suspect it describes any system that has accumulated sufficient lineage to always be in tension between new features, existing technology, standard practice and legacy user expectations.

Synthesizer manufacturers are experts at repackaging modules, hardware and software - their business model can't handle the costs of blank-slate development for each new product... Baffling UX often results.


I’ve been thinking about buying an MPC One, so I have been thinking about the MPC workflow and what it would take to learn it.

I figure a few years to get started. That’s unreasonable for a fart app and entirely reasonable for a musical instrument. Pianos and cellos require time to learn, and so do MPCs.

The MPC UX is clearly designed for musicians committed to the instrument. Sure you can play on-pitch notes more easily than with a violin, but you will need just as much practice to get to Carnegie Hall.

At its core, the MPC is a B2B product. The UX is designed with a full understanding it is not for everyone. No musical instrument is.


Legacy warts aside (16 Levels, the underdeveloped step sequencer) and incomprehensible omissions notwithstanding (tons of quality-of-life improvements, easily within the device's ability but neglected), the MPC sequence-based workflow was fine for me because I learnt it without prior knowledge of any other workflow... But now that I have tasted a scene-based structure with Roland ZenBeats, I understand why scene-based structure is currently dominant among DAWs... But still, having an instrument that feels like an instrument is so nice that I'm still happy to do without the open-ended complexity of DAWs... And there is almost always an MPC hack to do anything!


The main reason I am planning to buy an MPC One is to use it as a sampler. Over the years, maybe it will become the center of my contraption. Maybe it won't. Theoretically, a hardware DAW that is happy not being connected to the internet is something I want, and the MPCs are well-sorted. Well-sorted is what I want more than easy to use. Like a racecar.

As an aside, I realize I can think about my MC-303 and MC-505 in terms of a scene based workflow.


This is why I like Arturia gear. It's simple, but you can play live music on it without having to dive into a menu on anything. Even if I'm not using their instruments, I don't think I'll ever be able to get away from the KeyStep.


Even classical music has a similar 'problem'... performances of the 'same' work can be very different. (And I'm sure that would be true even if the same instruments and same musicians were used.) Most of the differences come from different conductors/ensembles. (Orchestras also have 'traditional performances' that they'd rather cling to, and may resist conductors (often younger ones) who attempt any 'wild-haired' differences.)

Music's very fluid. Recordings of performances are probably as close to 'same' as we can get. Non-musicians may prefer to listen to these 'same' performances over and over for the 'fidelity' of their experiences. But it might stop them (as they age) from discovering better ones.

If MIDI is used to create the dynamics, 'patches', and all the other modulations, oscillations, tempos, et al. of a synth or ten, then something close to 'same' might be approachable. But that means, oh, say, ten times as much work.
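
As a rough illustration of what that extra work looks like, here's a minimal Python sketch using the mido library that records a note plus a filter sweep as explicit MIDI messages (mapping cutoff to CC 74 is a common convention, not a given; any real synth's CC map is an assumption here):

    import mido

    # One track holding both the note and every parameter change we want
    # reproduced; anything not captured here is lost to the 'same'
    # performance problem above.
    mid = mido.MidiFile()
    track = mido.MidiTrack()
    mid.tracks.append(track)

    track.append(mido.Message('note_on', note=57, velocity=100, time=0))
    # Sweep filter cutoff (CC 74 by convention) across the held note.
    for value in range(20, 120, 10):
        track.append(mido.Message('control_change', control=74,
                                  value=value, time=60))
    track.append(mido.Message('note_off', note=57, velocity=0, time=60))

    mid.save('same_performance.mid')  # replayable, but far more authoring work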

Depends on the listener whether that's a good thing. In my experience, a superior cover of a single recording (same or different singer, different producer) can turn an OK single into a damn! single.

Edit: (Case in point: 'Major Tom (Coming Home)' 1983 version: https://www.youtube.com/watch?v=wO0A0XcWy88 )


I think this is a good insight.

When designing a notation, you're choosing what parts of a work you consider to be essential (notated) and non-essential (non-notated). That is itself an artistic choice. Everything you choose to not represent in the notation is something that you are letting future performers or cover artists choose for themselves.

The notation serves as sort of a negotiated dividing line between what the original artist wants to claim for themselves and consider a required part of "the work" versus what future artists are allowed to participate in.

Given that modular music is rarely played by anyone except the original author, I suspect that standardized notation isn't very important. All that really matters is each author having their own private shorthand so they can recreate what they care to recreate.


Where to improvise is always up to the performer. Even a precise score can be reinterpreted creatively; it is nothing but a hint.


True today! In classical orchestra performances, that usually required the composer to insert a solo 'cadenza' section where the soloist was free to be expressive without a conductor 'click-track'.

Today, bands (esp. jam bands like the Dead) might agree to back up anyone who decides to go 'off-plan'. (These days, we can even take big samples of someone else's tracks (if they agree) and do 'covers' that are an entire re-working of their originals.)


This seems like a somewhat solved problem. Other domains - like CGI - use node networks that show active parameter values. There isn't a lot of space between a node network and a modular synth patch.

The big difference is that you can save and load patches in a node editor, but you have to rebuild everything by hand on a modular. Even then you're never going to reproduce panel settings exactly.

Some people find this appealing, but for me it's the main reason I stopped using my big modular and changed to Cherry Audio/Softube/VCV Rack.
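
For what it's worth, a software patch file is conceptually just a structure like this (a hypothetical sketch with made-up module and parameter names, not any particular product's format):

    import json

    # A patch is nothing more than modules, knob values, and cables.
    patch = {
        "modules": {
            "vco1": {"type": "vco", "params": {"freq": 110.0, "shape": "saw"}},
            "vcf1": {"type": "vcf", "params": {"cutoff": 800.0, "res": 0.4}},
        },
        "cables": [
            {"from": "vco1.out", "to": "vcf1.in"},
        ],
    }

    # A node editor can round-trip this losslessly; a hardware modular
    # leaves the 'params' half to your hands and your memory.
    with open("patch.json", "w") as f:
        json.dump(patch, f, indent=2)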

It's also true that if you're synth-literate, you should be able to recreate many patches by ear. There isn't usually that much going on, so it looks a lot more complex on paper than it really is. Things get more complex if you're using modules that play samples or do something truly unusual, but even then you can usually get in the ballpark - if not exactly, then close enough for something that works aesthetically.

The musical part is a different problem. You can scribble graphic scores, but they're far too crude to represent anything beyond the vaguest hint of what's going on.


Notated organ music has had the "modular synthesizer" problem for centuries. The solution that organists chose was to just write stuff down in front of the score. It would probably be a lot clearer than any of the suggestions in the article to use a node network and a short text description of each node's settings.

People also need to let go of the idea that written music is about conveying how to create an exact reproduction of the original sound. That's what a recording is for. A musical score conveys the scheme under which to produce sound, and notates the important characteristics of the sound to produce. Everything else is intentionally left up to the performer.

If you don't believe that is valuable, you don't need to use it as a tool. However, it allows future musicians to both understand what you were thinking and put their own spin on your work.


I once attempted to write a program to deal with how annoying it can be to notate modular patches.

It's essentially a small DSL that can produce Graphviz charts of patches. There have been other attempts to do this kind of thing, but they rely on the writer to describe their modules, which makes it quite tedious. I wanted a 'library' format that would let people specify module interfaces once, so they could then be imported.
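
To sketch the idea (in Python rather than Perl, with made-up module names; this is not modmark's actual syntax):

    # A reusable 'library' entry describes a module's jacks once...
    LIBRARY = {
        "vco": {"inputs": ["fm"], "outputs": ["saw", "sqr"]},
        "vcf": {"inputs": ["in", "cutoff"], "outputs": ["out"]},
    }

    # ...so a patch only has to name the connections.
    PATCH = [("vco.saw", "vcf.in")]

    def to_dot(patch):
        # Emit a Graphviz digraph; each cable becomes an edge, and the
        # library lets us reject cables leaving a jack that isn't an output.
        lines = ["digraph patch {"]
        for src, dst in patch:
            mod, jack = src.split(".")
            assert jack in LIBRARY[mod]["outputs"], f"{src} is not an output"
            lines.append(f'  "{src}" -> "{dst}";')
        lines.append("}")
        return "\n".join(lines)

    print(to_dot(PATCH))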

I got a basic prototype working in Perl if anyone is interested, but never got around to really polishing it up and writing a bunch of 'libraries' for different modules.

https://git.spwbk.site/swatson/modmark

Interested if anyone knows of / has written something better.


Interesting problem to think about. The beauty of modular for me has always been that you can take voltage from literally anywhere and use it for CV. Modern modules also have an insane variety of controls and control surfaces; even for standard things like VCOs you have a ton of variety and feature sets. Saving the patch state is one thing, but actually notating it is crazy. I can't imagine someone being able to read this notation and play it accurately, the way someone sight-reads a piano piece. You'd surely require familiarity with the setup ahead of time. As for recording it for posterity, being verbose and describing what you're doing in full works, I guess.


Didn't Pd basically borrow SPICE's circuit-layout language for the way it textually stores patches? I know it exists and it's so brittle that nobody ever, ever edits it, but maybe there's a way to make it a little more resilient and editable?


Pd is the non-commercial end of Miller Puckette's (and others') contributions to Max/MSP, but with soooo much more.


This is true, but it doesn't really get to the question of how it serializes patches.


A textual notation for rich and generic graphs can only become more "resilient and editable" by giving nice names to entities and by keeping documents small and hierarchical.


Not true at all; parsing matters. JSON and YAML (and s-expressions and...) are equivalent, but YAML's significant whitespace makes it mangleable in a way that the others aren't.


I’m fascinated by notation attempts for modular, and I also find them refreshingly useless.

I’ve been playing with modular synths for over 25 years. One of my favorite parts is the ephemeral nature of patching. A bump of a knob or the nature of unsynced elements can quickly make actual recall of a larger patch impossible. Due to heat or other variability, I’ve had patches change on me over 45 minutes of no one touching them. In a world of digital recall and perfection, this really speaks to me. Immediacy can be relished. It is now or never.


I saw Morton Subotnick at my college, and he spoke of phase shift due to the heating of poorly designed VCOs as something serendipitous rather than something to be feared.


Phase shift?? Frequency shift, surely.


lol, yes! Thank you!


I have an easier “solution”

just pretend PCM is notation and that’s that. problem solved

the new problem introduced, however, is that now a “musician” (interpreter) is just a wav player

and a musician creator (singer songwriter, or composer, or producer of some sort) must choose zero or one way too many times in order to “write” one track.


Another example was put out by the DIY synthesizer kit company PAIA [1]. They had a patch notation system in which control voltages and connections were represented vertically and sound signal flow and processing was represented horizontally. The system was presented in a small booklet called The Source, which I have been unable to locate online, except for a photo of it with a PAIA synth [2].

The Source diagrams resemble the Figure 4 example by Allen Strange in TFA.

[1]. https://paia.com/

[2]. https://www.matrixsynth.com/2012/02/1981-paia-4700-modular.h...


That sounds a bit like the patch cards for the EMS VCS3; you pushed pins into the square patchboard to connect up the modules. If you mounted a piece of card over the patchboard and pushed the pins through that, the card became a record of your patch.

Of course, you couldn't "read" the card and guess what it sounded like; the patch card didn't record the settings of the knobs either.


I sometimes wonder what Western music notation would look like if it were designed from scratch today, with all the knowledge we now have.

It feels like the current system is the Imperial system and somewhere ought to be the SI metric system.


> It feels like the current system is the Imperial system and somewhere ought to be the SI metric system.

Any reason for that beyond “feeling?” And are you very deeply familiar with how the current system resolves differences between the single chromatic and 12 diatonic scales?

Full disclosure here: I’m a pianist who can generally sight read nearly anything (that’s well-edited) put in front of me, to the point of having music directed opera and theater for a living for a while, rather than using my CS and math degrees immediately after earning them from UNC.

I’ve also created an app, BeatScratch (https://beatscratch.io) that, among other things, attempts to resolve some of the challenges of editing (which I’d argue, again, really involves understanding how musical notation maps chromatic and diatonic scales between each other - i.e., “picking the best sequence of flats and sharps automatically to maximize readability”).
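
To give a flavor of the simplest corner of that problem, here's a toy Python pitch-speller that only knows the key's preference for flats or sharps (a deliberately naive sketch, not BeatScratch's actual algorithm, which has to consider far more than this):

    SHARP_NAMES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
    FLAT_NAMES  = ["C", "Db", "D", "Eb", "E", "F", "Gb", "G", "Ab", "A", "Bb", "B"]

    def spell(midi_note, key_uses_flats):
        # Choose the enharmonic spelling that matches the key signature.
        names = FLAT_NAMES if key_uses_flats else SHARP_NAMES
        octave = midi_note // 12 - 1
        return f"{names[midi_note % 12]}{octave}"

    print(spell(61, key_uses_flats=False))  # C#4, e.g. in A major
    print(spell(61, key_uses_flats=True))   # Db4, e.g. in Ab major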

To me, a lot of these “musical notation is outdated” arguments seem the same as “there will soon be no need for programmers; AI can write all our code,” or those arguments insisting some “visual programming” paradigm should replace languages like Rust and Python.

That is to say… they’re arguments made by people who don’t really understand why the existing systems work the way they do and have done so successfully for decades (or centuries).


It also seems to me that, in practical terms, the amount of changing of synthesis parameters in the midst of a song is generally not that great, and it’s not like we haven’t had the similar issue of notating pipe organ stops for a few centuries.

For that matter, it’s not like there aren’t centuries of alternative notation systems, whether it’s tablature, figured bass, or neume notation.


Usually, those are notated just with text placed either above the staff or between the staff lines (for a single organ stop). The normal stops only take a few characters to notate. More modern music for more complex organs will include a small amount of text indicating when to change between stop configurations or sets of manuals that are preset.

I can't imagine it would be that hard to notate such things for electronic music. Good old text sitting next to your good old staff works wonders.

The bigger issue for electronic music, IMO, is the notation of sounds that don't fit on a staff well. Modern classical composers (who use sheet music extensively in the creative process) have various notations for these sorts of things.


My comment was much more of a philosophical question than a hard criticism of the current system.

Yes, I know why it is the way it is. But if you reread your comment and your credentials, you'll get the gist of what I'm thinking.


Do you think that your understanding of the current notation has influenced the way you think about music subconsciously? I assumed you learned the two together rather than in sequence.


It's the worst system, except for all the others.


Props for the Churchill democracy reference. And so true. This notation problem definitely exists for all synthesizers, not just modular ones, and probably at some level for all new electronic instruments (which are synths under the hood).


Wonder no more; there have been a few attempts. Here are some from a quick HN search: Hummingbird (https://www.hummingbirdnotation.com/), Clairnote (https://clairnote.org/), and a thread with experimental music notation resources (https://llllllll.co/t/experimental-music-notation-resources/...)


I love all these attempts because I read the introductions nodding my head in agreement—yes traditional notation is annoying, yes we can use software to make better notations, yes it should be intuitive—and then I get to their actual proposal and it's just as inscrutable as traditional notation and, often, uglier.


It turns out that notating the production of sound is inherently a very information-theoretically hard problem.


I would beg to disagree with you. Many attempts have been made to replace music notation as we know it, and all have failed. In fact, more cultures around the world are adopting "western" music notation than ever before. It turns out that the expressiveness and information density of the notation are hard to match with any other system.

In turn, the notation that we use has become more flexible than ever. There are rich notations for microtonal music, various playing techniques, clusters of notes, and many more things. If you took a score by a modern composer (see Saad Haddad's pieces on YouTube) and showed it to JS Bach, it would be unrecognizable to him. Music notation is a living language just like English is.

By the way, I have been an on-and-off professional in the music world, although it is a "side gig" to programming, including some composing, tuning, and playing.

However, it is undeniable that music notation is a system made for power users. It's not an easy language to learn. The ideas of "information dense" and "expressive" should remind you of the ideas behind the APL programming language.


Obligatory (long-form) watch if you're going to continue wondering about this:

https://www.youtube.com/watch?v=Eq3bUFgEcb4

"Notation must die" by Tantacrul


Today’s standard music notation has ecclesiastical roots. In part because there were incentives to standardize ecclesiastical performances. In part because for a few centuries monks were the Europeans most committed to writing.


Slightly off-topic, but a cool web-based live-coded modular synth: https://felixroos.github.io/kabelsalat/


Less off-topic, and a cool web-based virtual modular synth: https://cardinal.kx.studio/

Cardinal runs natively, as a plugin in various formats, and in your browser. It is based on VCV Rack 2, but has a fixed (large) selection of libre-licensed modules.


> in your browser

"Exception thrown: Uncaught RangeError: Array buffer allocation failed" on Android Chrome.


Works here (same platform)


> It’s amazing that, through music notation, we know what the music of the medieval ars antiqua style sounded like over 800 years ago

More amazing is that anyone thinks we know that.


Archive link for anyone who, like me, is stuck in a "Verifying you are human..." loop (Cloudflare settings too aggressive?):

https://web.archive.org/web/20240714160259/https://www.perfe...


Tangentially related is the yearly Graphème publication from Smallest Functional Unit.

It's a curated collection of experimental music scores; part art magazine, part avant garde musical journal, and thoroughly delightful :)

https://smallestfunctionalunit.bandcamp.com/merch


Interesting that Nelson Goodman is mentioned. IMO music notation has the same, albeit less observable, problem as language, and that's context. Regardless of whether it’s the Köln Concert or a modular synth, if we try to capture all the context for, let's say, the first second of a musical piece, it’s almost impossible to do.


What’s the context of your opinion? ;) By that I mean, what context do you see missing from music notation that should normally be there, and what would it accomplish? What is the goal behind capturing all context, and when is that goal important? Do you have examples?


The author gives the example of the Köln Concert; we can buy scores to play the piece, but the score doesn't contain enough information to _really_ replicate the performance. That's what I meant by the missing context.


That’s a specific case of a piece explicitly and intentionally excluding the notes because it’s supposed to be improvised, right? The very point of the notation in that case is to avoid replicating a performance exactly. So what context is needed that’s missing? Why is replicating the performance exactly even a goal, when it’s not what the author wanted?


This is very interesting; however, all of this only works if we also know how to build the instruments. We probably have this knowledge for classical instruments, but do we have it somewhere for hardware synths?


Very cool! Synths can 'move' in a bigger musical space than a traditional instrument: it's fascinating how musicians can still manage to convey a written account of the music.


Once you drive your patch from a random source, the notion of a score mostly goes out of the window—you just have the patch, and you adapt to what comes out live.


Seems like whatever representation allows a DAW to play back a synthesizer in time is the right notation. It’s algorithmic in nature.
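
A toy version of that idea (hypothetical parameter names; a sketch, not any particular DAW's format): the "score" is a timestamped event list, and playback is just walking it in order, which is what an automation lane amounts to.

    # 'Notation' as timestamped parameter events; a player only needs to
    # apply them in time order.
    score = [
        (0.0, "vco.freq", 110.0),
        (0.0, "gate", 1),
        (0.5, "vcf.cutoff", 2000.0),
        (1.0, "gate", 0),
    ]

    for t, target, value in sorted(score):
        print(f"t={t}s  set {target} = {value}")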


Didn't Stockhausen create a notation for electronic music? I was surprised TFA didn't mention it.


Coincidentally, I'm working on a related problem today: how to find out whether a procedural sequence of notes can be expressed as a function, and whether there is a general technique for this.
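
One naive starting point (a sketch; real melodies would need a much richer hypothesis space than "affine in the index"):

    def as_affine(notes):
        # If notes[i] == a*i + b for all i, return (a, b); otherwise None.
        if len(notes) < 2:
            return None
        a = notes[1] - notes[0]
        b = notes[0]
        if all(n == a * i + b for i, n in enumerate(notes)):
            return (a, b)
        return None

    print(as_affine([60, 62, 64, 66]))  # (2, 60): a whole-tone run
    print(as_affine([60, 64, 67, 72]))  # None: needs a richer model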


Oh look, my old discipline. I feel compelled to weigh in, since my PhD research largely explored this topic, albeit in virtual reality.

Innovations and discoveries here (including my own) seem largely pointless. It's a classic example of "you spent so much time wondering if you could, you never stopped to think if you should".

The author's final section, "Does Notation Even Matter?", hits on the larger points I would make - the ephemerality of voltage (tuning), differences between modular systems, etc. - but it fails to make a strong case for the need to notate this kind of music apart from form(?).

It is disappointing to see these kinds of regressive pursuits still enjoying any kind of popularity in avant-garde music circles - they are experimenting with new and novel instruments, so why would paper, of all the modern mediums available to the artist, be best suited for notating this kind of music?


Perfect Circuit is a Los Angeles-based retailer specializing in electronic instruments. It has a practical interest in usage issues with modular synthesizers, given its large base of professional customers in recording, film, and other creative industries.


The bottom of the article has a section "Does Notation Even Matter?"



