Hacker News
Subtractive synths explained (2011) (residentadvisor.net)
209 points by mr_golyadkin 21 days ago | 76 comments



For those who are interested in this topic, I highly recommend checking out VCV Rack, an open source application that lets you build Eurorack-style virtual modular synthesizers: https://vcvrack.com/

I've been going down this rabbit hole lately and it's really fascinating.


This winter I went down this rabbit hole and back up, trying to approximate some acoustic instrument sounds. I went through a succession of different workflows, spending a couple weeks on each.

Workflow 1: feed a sawtooth oscillator through filters controllable with knobs. Eventually I realized that it will always sound mechanical, no matter how many filters you stack. That led to

Workflow 2: feed a sawtooth oscillator through a convolution reverb that uses a custom impulse response. For impulse responses, use random sounds downloaded from the internet (like wood strikes), or mixtures of existing instrument sounds. But that felt limiting, so I moved on to

Workflow 3: generate an impulse response wav file with Python, and use that in a convolution reverb to filter the sawtooth. This gave me some more interesting and configurable echoes, but then why start with an oscillator at all? So
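A minimal numpy sketch of that workflow: generate an exponentially decaying noise burst as an impulse response and convolve a naive sawtooth with it. All function names and parameter values here are my own illustration, not the poster's actual scripts.

```python
import numpy as np

SR = 44100  # sample rate in Hz

def sawtooth(freq, dur):
    """Naive (non-band-limited) sawtooth; fine for an illustration."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * ((t * freq) % 1.0) - 1.0

def noise_ir(dur=0.3, decay=8.0, seed=0):
    """Impulse response: white noise under an exponential envelope."""
    rng = np.random.default_rng(seed)
    n = int(SR * dur)
    env = np.exp(-decay * np.arange(n) / SR)
    return rng.standard_normal(n) * env

def render(freq=110.0, dur=1.0):
    """'Filter' the oscillator by convolving it with the generated IR."""
    dry = sawtooth(freq, dur)
    wet = np.convolve(dry, noise_ir())
    return wet / np.max(np.abs(wet))  # normalize to [-1, 1]

out = render()
```

Writing `out` to a wav file (e.g. with the stdlib `wave` module) and swapping in different IR generators is the whole game in this workflow.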

Workflow 4: write code to generate the sound on the fly, as a sequence of samples. This way I can mimic some nice properties of physical sounds, like "the nth harmonic has initial amplitude 1/n and decay time 1/n". Also I can get inharmonicity, smearing of harmonics (like in PADsynth algorithm), and other nice things that are out of reach if you start with periodic oscillators.

If I could go back and give advice to my months-younger self, I'd tell me to skip oscillators and filters, and jump straight into generating the sound with code. You need to learn some math about how sound works, but then you'll be unstoppable. For example, here's a short formula I came up with a month ago for generating a pluck-like sound: https://jsfiddle.net/yd4nv5Ls/ It's much simpler than doing the same with prebuilt bricks.
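The "nth harmonic has initial amplitude 1/n and decay time 1/n" idea can be written down in a few lines of numpy. This is my own sketch of the principle, not the poster's actual jsfiddle formula, and the decay constant is an arbitrary choice:

```python
import numpy as np

SR = 44100

def pluck(f0=220.0, dur=2.0, n_harmonics=20):
    """Additive pluck: the nth harmonic starts at amplitude 1/n
    and decays n times faster than the fundamental."""
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for n in range(1, n_harmonics + 1):
        amp = 1.0 / n
        decay = 3.0 * n  # higher harmonics die out sooner
        out += amp * np.exp(-decay * t) * np.sin(2 * np.pi * n * f0 * t)
    return out / np.max(np.abs(out))

sound = pluck()
```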

The whole experience made me suspect that there's an alternative approach to building modular synths, based on physical facts about sound (as opposed to either starting with oscillators, or going all the way to digital modeling of strings and bows). It would be similar to physically based rendering in graphics: for example, it would enforce a physically correct relationship between how high a harmonic is and how long it rings, and maybe some other relationship about what happens to harmonics at the start of the sound, etc. But I'm not an expert and can't figure out fully how such a synth would work.


Subtractive makes a lot of classic synthesizer sounds, which is why it's a thing. It's also easy to understand.

It seems to be easy to implement digitally, but it really isn't, because a lot of the nuances and non-linearities that add weight and colour to synthesis with real electronics aren't present in simple digital emulations.

For pure DSP the choice is more or less between open additive, modal (which is a kind of constrained additive), AI-constrained additive, which is what Google have been playing with, and physical modelling, which is digital modelling of strings and bows.

If you want to "enforce a physically correct relationship" between etc you're going to want AI-constrained additive or physical modelling.

The aesthetics of all of this are a different topic altogether.


> If you want to "enforce a physically correct relationship" between etc you're going to want AI-constrained additive or physical modeling.

I was thinking of relationships like these:

1) Decay time of nth partial falls as a certain formula of n.

2) Frequency of nth partial is slightly different from n * fundamental, by a factor which is a formula of n.

3) Spectrum of nth partial isn't a delta function, but a hump whose width is a formula of n.

All these ideas come from physical effects, but you can use them to generate sounds directly, without any physical modeling or AI. My hunch is that there could be more such ideas, and that they could play together nicely.
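All three relationships can be dropped straight into a plain additive loop. In this sketch the particular formulas (the 1/n decay, the stretch factor, the Gaussian smear) are illustrative guesses of mine, not measured physics:

```python
import numpy as np

SR = 44100

def physical_additive(f0=110.0, dur=2.0, partials=16,
                      inharmonicity=1e-4, smear=0.002, seed=0):
    """Additive synthesis with the three relationships:
    1) decay time of the nth partial falls as 1/n,
    2) the nth partial is stretched to n*f0*sqrt(1 + B*n^2),
    3) each partial is a narrow 'hump': a few detuned sines
       whose spread grows with n (a crude PADsynth-style smear)."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(SR * dur)) / SR
    out = np.zeros_like(t)
    for n in range(1, partials + 1):
        freq = n * f0 * np.sqrt(1.0 + inharmonicity * n * n)   # (2)
        env = np.exp(-4.0 * n * t)                             # (1)
        for detune in rng.normal(0.0, smear * n, size=3):      # (3)
            out += (1.0 / n) * env * np.sin(2 * np.pi * freq * (1 + detune) * t)
    return out / np.max(np.abs(out))

sound = physical_additive()
```

Each numbered comment matches one of the relationships above, so each can be swapped out independently to hear what it contributes.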


Do you mind emailing me?


> The whole experience made me suspect that there's an alternative approach to building modular synths, based on physical facts about sound

Yamaha experimented with this in the 90s with their VL series physical modeling synths, but it never caught on, mostly, I think, because convincing results really require alternative MIDI controllers, like a breath controller[1] for woodwind and brass instruments.

An alternative take on why physical modeling synths never really caught on is GigaSampler[2]. It was the first sampler (as far as I can remember) that could play back samples from hard disk, keeping only the first second or so of each sample in memory. This made it possible to have sampled instruments where, for example, each key of a piano was sampled at various velocity/loudness values, resulting in a sampled piano that could span multiple gigabytes. At a time when 128MB of RAM was still quite a lot, this was revolutionary. While physical modeling can produce convincing sounds with a potential expressiveness that no sample-based instrument will ever match, its base sound still doesn't sound as 'real' as a properly sampled instrument, recorded in a nice room with good microphones.

[0] simple overview, including some soft synth alternatives: https://www.musicradar.com/news/tech/blast-from-the-past-yam...

[1] example breath controller: https://www.akaipro.com/ewi5000

[2] Review of Gigasampler's successor: https://www.soundonsound.com/reviews/tascam-gigastudio-4


On the flipside to that, there's Pianoteq 6, a physically modelled piano. After going through several sampled piano libraries I settled on Pianoteq as my "forever" piano VSTi. It's less resource-intensive than the giga-samplers and "feels" more playable and expressive in a way I can't rationally explain.

(It helps that some of my favorite producers and composers have used Pianoteq - for me that's Guy Sigsworth and Thomas G:Son, but the Ludovico Einaudi endorsement really clinches it for me.)

https://www.pianoteq.com/


I feel if you're going to such lengths to approximate physical instruments, maybe just record physical instruments? Synths can do things that can't be done physically, so why not use them to that end? Although I understand it's sometimes an aesthetic choice, having a poorly-approximated physical sound.


Because playing the violin is hard, and emulating it is a different challenge. Of course, sequencing a violin track with just note on/off and velocity (i.e. with a keyboard) is a poor approximation. But better controllers and automation techniques can be transferred back to other sounds, for example. It's serious fun, at least. We do say we "play" music, after all.


We're well beyond the era of "poorly approximated" virtual instruments. Most orchestration you hear nowadays is virtual, and developers like Sample Modelling make stuff indistinguishable from the real thing.

I'd much rather be able to just plug MIDI into a plug-in to get, say, a saxophone line for a song than have to buy a top-tier saxophone and learn to play it in a perfectly soundproofed room with a great microphone, DAC, etc.


> I'd much rather be able to just plug MIDI into a plug-in to get, say, a saxophone line for a song than have to buy a top-tier saxophone and learn to play it in a perfectly soundproofed room with a great microphone, DAC, etc.

You could also ask someone who already knows how to play the sax to do it for you, and use a midi based sax sound until you have the score perfected as a stopgap.


The digital waveguide patents have now expired and a new group has just released a wind instrument software synthesizer based on this approach: https://www.imoxplus.com/site/

They are also developing a completely new type of multidimensional embouchure sensing mouthpiece with which to play the instruments. It should be easy to learn but offer deep potential for expressiveness.

Also note that the goal of these instruments is not to faithfully emulate the timbre of any existing instrument (for that, use a sampler) but to emulate dynamic behavior, which is where the true expressiveness of wind instruments comes from.

Disclosure: I am developing the mouthpiece.


In grad school, I had a tiny start-up with a classmate selling a virtual instrument doing just what you describe (for guitar sound synthesis). We had PDE models for the string, finger/string interaction, fret/string interactions etc. We would then discretize these PDEs to generate in real-time a synthetic "force" applied by the string on soundboard, and then convolve this in real-time with measured impulse responses of real acoustic guitar soundboards. Our website is still up (spicyguitar.com) and we give it for free now, but I don't know if it still works in recent DAWs.


Loaded it up and I can confirm it still works in FL Studio 32b and 64b versions. I really like the music box preset, very melodious with certain parameters.


Thank you!


Hello again :) There is this phenomenon called 'stick-slip' that is an essential component of any bowed string instrument. It might be worth modelling that to see what it sounds like; you might end up with some very realistic bowed-string sounds, as well as a glass organ.

https://en.wikipedia.org/wiki/Stick-slip_phenomenon
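The classic toy version of stick-slip is a mass on a moving belt, held by a spring: the mass sticks until the spring force exceeds static friction, then slips under (lower) kinetic friction, producing a sawtooth-like oscillation much like a bowed string. All the constants below are arbitrary illustration values, not calibrated to any instrument:

```python
def stick_slip(steps=20000, dt=1e-4, belt_v=0.1,
               k=500.0, m=0.01, mu_s=1.0, mu_k=0.6, normal=1.0):
    """Mass-on-a-belt toy model of bow/string stick-slip."""
    x, v = 0.0, belt_v   # start stuck, moving with the belt
    sticking = True
    xs = []
    for _ in range(steps):
        spring = -k * x
        if sticking:
            v = belt_v
            if abs(spring) > mu_s * normal:   # spring wins: break away
                sticking = False
        else:
            # kinetic friction drags the mass toward belt speed
            friction = mu_k * normal * (1.0 if belt_v > v else -1.0)
            v += (spring + friction) / m * dt
            # re-capture when the mass catches back up with the belt
            # and static friction can hold it there
            if v >= belt_v and abs(spring) <= mu_s * normal:
                sticking = True
                v = belt_v
        x += v * dt
        xs.append(x)
    return xs

motion = stick_slip()
```

Plotting `motion` shows the characteristic slow-charge/fast-release cycle; a real bowed-string model would couple this friction curve to a waveguide string rather than a single mass.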


FM synthesis can generate a very broad array of sounds. A lot of the time you get bad results, but if you're looking for something that can make all kinds of different sounds, it's one option. There are actually so many synthesis techniques; you should download some software synths and have a listen. Really fun stuff.


I think FM synthesis is really underappreciated for modeling acoustic sounds. It really shines for mallet instruments, but also brass and woodwind and even plucked sounds like acoustic guitar. By making operators react differently to velocity and other modulation changes, you can create very expressive sounds. However, it requires quite some trial and error indeed.
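For the curious, the mallet case can be sketched with just two operators: a modulator whose index decays quickly, so the tone starts bright and metallic and settles toward a near-sine. The ratio and envelope constants here are my own guesses, not a preset from any real FM synth:

```python
import numpy as np

SR = 44100

def fm_mallet(carrier=440.0, ratio=3.5, index=6.0, dur=1.0):
    """Two-operator FM: a modulator at carrier*ratio phase-modulates
    the carrier; a non-integer ratio gives inharmonic, bell/mallet
    partials, and the fast-decaying index gives the bright attack."""
    t = np.arange(int(SR * dur)) / SR
    amp_env = np.exp(-4.0 * t)               # overall loudness decay
    idx_env = index * np.exp(-8.0 * t)       # brightness decays faster
    modulator = idx_env * np.sin(2 * np.pi * carrier * ratio * t)
    return amp_env * np.sin(2 * np.pi * carrier * t + modulator)

sound = fm_mallet()
```

Making `idx_env` respond to velocity is the kind of operator-level expressiveness described above.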


The idea of modular synths is to have the potential to create all of that: FM, AM, subtractive, and more. Isn't it?


Partially/Depends. Yes, they aim for more flexibility, but that doesn't mean they necessarily do a good job of providing the things you want for specific synthesis types.


Plucked string instruments are one of the simpler classes of instruments for physical modelling. Waveguide synthesis uses delay lines and filters to model a wave propagating along a string and reflecting off the bridge and nut.

The famous Karplus–Strong algorithm used a burst of white noise as the excitation, but I've had more success using an asymmetric triangle-shaped impulse that resembles the displaced shape of the drawn string.
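A minimal Karplus–Strong sketch with both excitation options. This is my own illustration; the damping constant and pick position are arbitrary:

```python
import numpy as np

SR = 44100

def karplus_strong(freq=220.0, dur=1.0, pick_pos=0.2, use_noise=False):
    """Karplus-Strong: a ring buffer one period long, two-point
    averaged (lowpassed) and slightly damped on each pass.
    Excitation is either the classic white-noise burst or an
    asymmetric triangle peaking at pick_pos, roughly the shape
    of a string drawn aside before release."""
    period = int(SR / freq)
    if use_noise:
        buf = np.random.default_rng(0).uniform(-1.0, 1.0, period)
    else:
        peak = int(period * pick_pos)
        buf = np.concatenate([np.linspace(0, 1, peak, endpoint=False),
                              np.linspace(1, 0, period - peak)])
    buf = buf - buf.mean()  # remove DC so the tone decays to silence
    out = np.empty(int(SR * dur))
    for i in range(len(out)):
        out[i] = buf[i % period]
        # averaging lowpasses; 0.996 sets the overall decay rate
        buf[i % period] = 0.996 * 0.5 * (buf[i % period] + buf[(i + 1) % period])
    return out

tone = karplus_strong()
```

Comparing `use_noise=True` against the triangle makes the difference in attack character easy to hear.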


Check out the sub synth in ZynAddSubFX; it's really easy to get started with and makes a great sound.


There are physically modelled instruments like the one from Pianoteq. No samples, purely rendered. Very nice.


Slightly related: your pluck sounds reminded me of the beginning of "Tonto" by Battles (https://vimeo.com/117914655)


Since a lot of people here are technical, here's a fun tip: VCV Rack stores the rack as a (somewhat) human-readable JSON file.

A couple of friends and I are experimenting with collaborating using Github and pull requests.


That's pretty cool. How well do merges work if two commits change the same file?


I tried it and only ran into a couple of issues:

1. I was using VCV Rack on Windows, but I think my friend was not, so you may need to tweak Git's line ending settings

2. Some changes you just don't want to merge. The other person could have different audio output settings. There can also be rounding errors in the knob values, so values that were exactly 1.0 before can end up slightly edited even if you don't think you touched them, which makes merges a little noisier. But `git add -p` made short work of cleaning that up.

Edit: This also assumes you coordinate who edits at any given time; "ping pong" is easier than free-for-all.
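That rounding-drift noise can be scrubbed before committing with a generic JSON pass that snaps near-integer floats back to integers. No VCV-specific schema is assumed here; this walks any JSON structure:

```python
import json

def snap_floats(node, tol=1e-6):
    """Recursively snap floats within tol of an integer back to that
    integer, so drift like 0.9999999 instead of 1.0 doesn't pollute
    git diffs of a patch file."""
    if isinstance(node, dict):
        return {k: snap_floats(v, tol) for k, v in node.items()}
    if isinstance(node, list):
        return [snap_floats(v, tol) for v in node]
    if isinstance(node, float) and abs(node - round(node)) < tol:
        return float(round(node))
    return node

def clean_patch(text):
    """Load a patch file's JSON, snap drifted values, re-serialize."""
    return json.dumps(snap_floats(json.loads(text)), indent=2)
```

Running both sides' files through `clean_patch` before committing keeps diffs limited to intentional edits.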


Software modular synth is really good for learning about sound and signal. I use Alsa Modular Synth[0], which has its faults, but has a simpler interface than VCV Rack. I use it sometimes even to filter my voice when role-playing monsters in our online Dungeons & Dragons game (I love how easy these things are to do with JACK on Linux; I used to do something similar in PulseAudio with LADSPA filters, but it was much more of a pain).

That said, I got myself a Moog Mother-32[1] recently, which is a hardware semi-modular synth, and I found it much more musical than software alternatives. I could immediately create much more musical pieces than I could with the software – even though I have MIDI peripherals for the software and the software can create any module I can imagine.

[0] http://alsamodular.sourceforge.net/

[1] https://en.wikipedia.org/wiki/Mother-32


> Alsa Modular Synth

Another good option is SunVox; it's part modular synth, part tracker in a really cool UI. It includes a bunch of simple examples that demonstrate how to use the synth modules.

http://www.warmplace.ru/soft/sunvox/


Slightly off topic: I just added a link in that Wikipedia entry to the Phone_connector_(audio) entry at the first occurrence of "phone jack", because I'd never seen that connector referred to by that exact phrase.

Back to the topic at hand. I've played with software gizmos, and recently picked up three secondhand units from the Korg Volca range. I'm not particularly musically talented (trying to change that over time), but I find the physical controls way way way more intuitive.


I’d highly recommend getting comfortable programming a subtractive synth to anyone interested in making music, particularly electronic music.

I’ve been fooling around with music for about 20 years, but the path I took (trackers, which are sample based, then into VSTs with their huge preset libraries) plus a lack of self-discipline meant that I never really mastered making a sound from scratch on a synth. It’s only in the last few years since getting into hardware that I’ve started to learn this, and it has made such a difference to my music making - I can now often dial in the sound in my head without having to trawl through presets, or I can take a preset and either modify it to my taste, or work out what it’s doing and recreate a similar sound.

If I could go back in time and give myself some advice, it would be to pick up a basic hardware subtractive synth with knobs for each function on it and master making sounds on it from scratch. Something like the Korg Minilogue would be perfect. Alternatively, a good synth on an iPad or creating a good midi mapping for one VST synth and mastering it would do the trick, but I think there’s something about the hands on design of hardware, plus that you’ve invested money in it, that makes it the ideal learning platform (and a lot of fun!). The skills you learn doing this are transferable to any other synth and other areas of music making.


I highly recommend Syntorial (https://www.syntorial.com/), it took me from nothing to being somewhat ok in almost no time at all.


Yeah I did the first few parts of Syntorial and thought it was pretty great, maybe I’ll pick it up again one day!


Would you say that it's a good first project for learning programming from the beginning as well or would other (easier) projects be more suited for this task?


IMHO it's a very worthwhile lesson from a maths perspective, as it may motivate learning the theory, but it has nothing much to do with iterative, imperative, text-based programming.

A good comparison would be some visual programming environments and modular Buzz-machine-type DAW programs, which both look very similar, employing the graph metaphor: nets of generators and filters. I'm not sure whether that's comparable to functional reactive programming, which I have never tried.

Writing a synth in C would be very different, at any rate, I guess; it's something I don't even know how to go about in an ergonomic fashion.


Learning computer programming you mean? If you think creating something musical would motivate you, JavaScript using the WebAudio API isn’t a bad starting place, as it provides high level components such as oscillators and filters that you can plug together without needing to know the internals - for example, check out https://teropa.info/blog/2016/07/28/javascript-systems-music..., which is aimed more or less at newcomers to programming. Another good starting point if you are using Mac (and iOS, if you like) is AudioKit, which has some great interactive “playgrounds” and similar high level “blocks” of code as WebAudio, which you can wire together to get quick results.

If you want to write your own plugins or audio software you’ll need to learn C++ at some point, which I think is a fairly complex language to start with and likely to be frustrating in terms of getting quick results, but your mileage may vary!

A colleague of mine learned C++ as his first “real” language (which is now his job) because he started out playing around in SuperCollider (a DSL for computer music stuff) and wanted to turn his ideas into real plugins, but it can get pretty complex pretty quickly so I’d say you’d have to be pretty motivated to do this!

If you do want to go down this route I would probably start with the aforementioned SuperCollider or something similar (maybe Max/MSP) to get an idea of how the musical side of things works without having to master C++ at the same time, then when you feel confident you can start learning C++ - personally I’d recommend the JUCE framework as it is designed for audio apps, hides some of the complexity/gotchas of C++, and has some good tutorials for beginners: http://juce.com.

DSP stuff can get pretty maths heavy and I’ve not come across anything equivalent to the building blocks supplied by WebAudio/AudioKit for C++, although JUCE does now have some DSP modules such as filters which you can easily use, and there is sample code online... but you’ll probably have to get your hands dirty at some point :) Will Pirkle’s book on audio plugins provides quite a good intro to DSP but the code in the book is, to be honest, pretty outdated/bad style - he is apparently working on an updated version, but if you are able to take the code with a pinch of salt (e.g. write the examples yourself in JUCE rather than using his RackAFX framework) you might find it useful.

Feel free to reach out to me for more advice, my background is in web development but I’ve been working on audio stuff professionally for the last couple of years so have been through the learning process of C++ etc and would be happy to help if I can!


Thanks, yes, exactly. I thought perhaps I could combine these two fields since I very much enjoy making electronic music but I'm still intimidated by code. C and C++ in that regard sounds like the final bosses of the intersection between audio and computers, so I think I'll take up your suggestion and start with JS and the WebAudio API.


Haha, correct! Yeah that’s probably the best place to start :) There are loads of cool WebAudio apps out there for inspiration, e.g. check out https://blokdust.com/.

I’d check out that tutorial I posted, once you feel comfortable with the basics of JS you may also want to take a look at https://tonejs.github.io/, which provides a layer on top of WebAudio with useful functionality like synths, sequencers etc. Can save a lot of time!

There are many other great tutorials and open source WebAudio projects out there too. Have fun, hopefully you can post something you have made on here in the not too distant future! Like I say, feel free to give me a shout on email/twitter if you need more advice.


More synthesis technical aspects and techniques are well-covered by the dozens of articles in "Gordon Reid's classic SYNTH SECRETS series" on the Sound-On-Sound site. https://www.soundonsound.com/search/articles/%22Synth%20Secr...

Now 20 years old and still online, this is pretty much a complete course in theory and practice.


Here is a GitHub repo containing all the articles: https://github.com/micjamking/synth-secrets


I got this from Brian Eno's Twitter feed: https://twitter.com/dark_shark/status/1122355076360572928


Imagine for a moment I'm a recommendation engine.

People who liked Brian Eno's twitter feed also liked...

Have you seen Reactable? http://reactable.com/


> Imagine for a moment I'm a recommendation engine

I wish there were one like that. If only we had some companies with big silos of our data, ML tech and server capacities.


I think that's actually an Eno fan Twitter account rather than the man himself. I also follow it, though; there's some good content :)


https://www.syntorial.com/ is pretty much the best way to not just learn the technical, but how each part sounds.


Came to say this. It’s amazing software


If anyone is interested in getting started playing with synths, AudioKit Synth One is a great free, open source synth for iOS: https://audiokitpro.com/synth. iOS is a pretty cool platform for starting to play around because the multi touch nature of a touch screen makes it more fun (IMO) than using a mouse to drag virtual knobs around.


Thanks for the AudioKit Synth One shout out! We're working hard to improve open-source synths


Subtractive synthesis is sort of like the OOP of synthesis methods. It may not be the most flexible or interesting, but it’s predictable and ubiquitous and something any sound designer needs to learn.


This has been my favourite course to learn about Audio Synthesis; it's old school but has all you need to know, in a digestible format:

https://www.youtube.com/watch?v=atvtBE6t48M


I have an Arrick synth[1] (22 slots) and it can be fun to just explore the sound space that the filters can create. I also found it fun when I started building some FFT tools that I could use it as a complex signal generator and see the results in the FFTs I was computing. :-)

[1] https://synthesizers.com/


This is a pretty in-depth introduction to Subtractive synths, although I think it's probably a bit daunting for most beginners, particularly as the synths illustrated change, and in my experience showing a complex synth (even if it is made up of modules that a student understands) will make most people shut down in fear pretty quickly!

I'd suggest getting a simple synth (such as the ES M shown in the second image, or simple free VST synths if you're on windows, such as the PG-8X [1]) and spending some time playing around with the controls, and getting to know the sounds. There are usually 'Oh, THAT'S what that sound is' moments when playing around with the filters for the first time.

Once you're comfortable with that, move onto more complex synths (such as the Superwave P8[2]) and you'll find that many of them are not really more complex, but just have more of the sections you already know - a bit like learning a channel on a mixer, and then moving to a 72-channel mixer from an 8 channel one.

Modular synths such as VCV rack (mentioned elsewhere here) are really great for experimentation with the architecture of subtractive sound generation (as mentioned elsewhere in this thread), but for many they are intimidating initially, and have setup time cost if you're starting from scratch. I think they are for a certain kind of personality (myself included!), but not for everyone. The reason that the 'standard' architecture which is described in the OP exists is because people habitually used the same setup (osc > filter > amp with lfo and envelopes, as seen to a degree in the 'olympic rings' analogy), and manufacturers wanted to produce simpler, cheaper synths for a wider market. Often you end up making variations on that theme. The great advantage of software modular synths is that you can save (and load!) your setups, and you never run out of cables or modules (or money!).

[1] - https://sites.google.com/site/mlvst0/

[2] - http://www.superwavesynths.com/p8


I really liked the Superwave P8 back when I used Windows. Does anybody know if it can work in Linux?


We appear to be going through a surge of interest in synths, maybe due to those who were dancing back in the day now discovering how the sounds were made. I've become one of those fascinated with the TB-303, which, for all the misty eyes around the sound, is just a simple single-transistor oscillator plus a low-pass filter and a sequencer. The huge amount of diverse info available makes studying the 303 a really fun intro to subtractive synthesis.


What about FM synths? They seem even wackier to program.


FM synthesis is fascinating and extremely flexible but it’s notoriously difficult to master. If you are interested I’d recommend learning basic subtractive synthesis first since they have some concepts in common.


Oh, I'm alright with subtractive/additive synths ... I have a Novation. FM puzzles me to no end. I can handle 2 oscillators but 4 or 8 seems maddening.

I'm not sure if I need to approach it by trial and error, or if there are a few tricks to programming it!


If you take apart some more complex patches you can see they’re sort of composites of simpler patches. But then there are also some patches that are just very complex.

I think ML and GANs might be interesting to apply to FM patch design.


Indeed, one of the most prolific of all synthesists has even established himself as a leader on this particular topic - using ML techniques to create FM patches. And, it is simply bloody awesome:

https://www.factmag.com/2017/07/14/watch-aphex-twin-midimuta...


Elektron’s Digitone hardware is probably the most approachable FM synth I’ve come across. They’ve made some clever design decisions such as a fixed selection of ratios to make it easier to find musical sounds, and adding two conventional subtractive filters (one resonant, one more of a utility one to high/lowpass the resulting output) which makes it easier to tame the sound in an intuitive way. If you’re into FM I really recommend it, combined with their excellent sequencer which lets you lock any set of synth parameters to any step really quickly and great on board effects, you can get some excellent results.

I guess the closest software equivalent I’ve seen is Ableton’s Operator but I find the Digitone much more fun, not least because it’s hands-on hardware which encourages exploration!


FM is hard. Let the machines do the work. :)

https://www.factmag.com/2017/07/14/watch-aphex-twin-midimuta...


Midimutant is an evolutionary resynthesis system. These have been around for about a quarter century but I think they've not proven very helpful.

There is another way to let the machines do the work: interactive evolution. Brian Eno thought of this in 1995 (among others): the system presents you with N candidate patches, you select the ones you like, and it uses your selections to think of N new patches to offer you. Basically it's assisting you in moving forward through the space of patches in search of interesting stuff without having to program them.

Interactive evolution works really well for FM, but systems which provide it are rare. And now for the self-promotion: as it so happens, I just presented a paper on my system a few days ago.

https://cs.gmu.edu/~sean/papers/evomusart19.pdf https://github.com/eclab/edisyn
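A toy version of that interactive loop, assuming patches are just vectors of normalized parameters. This is my own sketch of the general idea, not Edisyn's code:

```python
import random

def mutate(patch, sigma=0.1):
    """Gaussian-perturb each parameter, clamped to [0, 1]."""
    return [min(1.0, max(0.0, p + random.gauss(0.0, sigma))) for p in patch]

def next_generation(selected, pop_size=8, sigma=0.1):
    """Interactive evolution step: the user picked `selected` patches
    from the last batch; breed a new batch by uniform crossover of
    the picks followed by mutation. The user listens, picks again,
    and so walks through patch space without programming anything."""
    children = []
    while len(children) < pop_size:
        a, b = random.choice(selected), random.choice(selected)
        child = [random.choice(pair) for pair in zip(a, b)]
        children.append(mutate(child, sigma))
    return children
```

The only fitness function is the user's ear, which is exactly what makes it workable for something as unintuitive as FM.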


It depends on your goals, and on your modes and means as a musician. In terms of being helpful, systems that give the musician a grip on the sound are essentially as good as one another. Inasmuch as applying advanced technology to synthesis is as much a head game, it's also about the ingredients of the jam. Music as a substance benefits from all approaches; even the lowliest of AWM boxes can be masterfully applied, by masters of the art.

EDIT: Great work, adding your paper to my to-read stack...


I have a bit of a reversing mindset, coming from a security engineering background, and find Reverb Machine (https://reverbmachine.com/synth-sounds) a great place to see synth sounds from modern music built up from scratch (usually starting with just a plain old sawtooth init patch).


The world needs a modern LaTeX textbook on the electronics of synthesizers (with lots of math), but I don't think one exists.


There is Hal Chamberlin's Musical Applications of Microprocessors, which is a pretty approachable book with some good background theory; there is also Chowning's FM synthesis book.


I meant analog synthesizers, where there are basically schematics, some lectures from 20 years ago, and that's it.


The circuits aren't all that different from what they were 20 years ago, so the lectures are still valid :-). A good linear-circuits textbook will cover the basics of oscillators, filters, and amplifiers; that's pretty much standard EE curriculum for undergraduates. The application to music is somewhat incidental to their design.


The lectures I'm thinking of are literally 360p videos of a guy and a whiteboard, with no notes or legible writing IIRC.

And also, the tolerances in synthesizers are actually fairly small, and idiomatic synths use quite a few relatively obscure parts. For a slightly terrible example: how many introductory electronics books discuss OTAs in any detail?

I already know this, but it's not easy to find in one resource: the application to music is sufficiently obscure (analog synths require much more coaxing than, say, a guitar amp) to warrant dedicated discussion.


Ok, I think I get it, but let me try telling you what I heard and you can correct where I get it wrong.

You are looking for an "introductory electronic book" that discusses the types of circuits that are used in analog synthesizers. Further, those discussions should be accessible (understandable) to someone with little or no prior understanding of linear circuit theory.

Is that a correct reading of the thing you are seeking? If so, I would start with something like the Sams op-amp circuits book. If you aren't put off by mathematics, and your original message suggested you were okay with that, then the first four chapters of "The Art of Electronics" (Horowitz and Hill) cover pretty much all of the information you need to read any of the schematics on the Moog schematics site[1]. Both books discuss filters, VCOs and VCAs, and transconductance as well.

As for precision, typically analog synthesizers are not nearly as precise as you might imagine. Like many instruments they were made to have a quality sound, which may or may not be strictly accurate in terms of musical representation. One of the nice things about the Moog Model 15 was that you could tune it to different types of scales. You do want thermally stable circuits so that you aren't re-tuning all the time, but setting up in the studio I would typically spend anywhere from 5 to 20 minutes with the 'high C' (1046 Hz) reference signal, tuning the various oscillators and amplifiers to get a nice 0 dB level at the final output and the half dozen or so oscillators matched in frequency. Not at all like a "modern" keyboard where you turn it on and blam! you're ready to play.

If I am still misunderstanding what you're asking, I would like to understand that. You wrote "The application for music is sufficiently obscure ...", which sounds like you are looking for a specific tie-in to music in general. However, the tie-in to music is, for the most part, entirely incidental to the mechanics of how these things are built, so references typically cover the fundamental properties of these circuits without calling out their musical application, which seems to me fairly obvious once you know the fundamentals.

[1] https://moogfoundation.org/bob-moog-schematics-release-1-for...


This is timely. I've been getting into music and spent a couple of hours fiddling with the Korg 15 emulator on my iPad.


One VSTi I was always infatuated with was Glass Viper. I haven't been able to find anything that sounds quite like it.


Why is it called subtractive when the signals are added or multiplied?


The sound making starts by creating "all" the spectral content (e.g. a saw or square wave is a sum of infinitely many sine waves, up to the Nyquist limit), and then the synth uses a filter to subtract some frequencies, typically all frequencies above a certain cutoff. So the term "subtractive" comes from the fact that it first makes a lot of spectral content, then removes some of it.
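That recipe fits in a few lines of numpy: a naive (non-band-limited) sawtooth for the rich spectrum, then the simplest possible filter, a one-pole lowpass, to subtract the highs. The cutoff and frequencies are arbitrary illustration values:

```python
import numpy as np

SR = 44100

def saw(freq, dur):
    """Naive sawtooth: harmonically rich starting material."""
    t = np.arange(int(SR * dur)) / SR
    return 2.0 * ((t * freq) % 1.0) - 1.0

def one_pole_lowpass(x, cutoff):
    """Subtractive step: attenuate partials above roughly `cutoff` Hz."""
    a = np.exp(-2.0 * np.pi * cutoff / SR)
    y = np.empty_like(x)
    acc = 0.0
    for i, s in enumerate(x):
        acc = (1.0 - a) * s + a * acc
        y[i] = acc
    return y

bright = saw(110.0, 0.5)
dark = one_pole_lowpass(bright, 400.0)
```

`bright` is buzzy; `dark` keeps the fundamental and the first few harmonics, which is the whole "rich source, then subtract" idea in miniature.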


I see, thanks. I never thought of filters like "cutoff" as subtractive, but I guess that's what they are.


I believe it's named this way because filters are used to subtract harmonic content.



