KS is one of the most interesting algorithms to experiment with. You can modify, multiply and recombine it with itself in a huge variety of ways, and always get new and weird sounds out of it.
In fact it forms the basis of a large branch of DSP for sound effects and synthesis: physical modeling. Of course many techniques can be used in physical modeling, but often it's actually delay lines all the way down, and delay lines are really a generalization of the KS algorithm with better interpolation. Many crucial effects in digital music, such as chorus, phaser/flanger, and algorithmic reverbs, are also just delay lines fed into each other.
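For anyone curious, the core loop is tiny. Here's a minimal sketch (my own toy version, assuming a 44.1 kHz sample rate and the classic two-point averaging filter from the original paper):

    // Minimal Karplus-Strong: a delay line seeded with noise, with a
    // two-point average (a gentle low-pass) in the feedback loop.
    function karplusStrong(freq, seconds, sampleRate = 44100) {
      const n = Math.round(sampleRate / freq);   // delay-line length sets the pitch
      const delay = new Float32Array(n);
      for (let i = 0; i < n; i++) {
        delay[i] = Math.random() * 2 - 1;        // the "pluck": a burst of noise
      }
      const out = new Float32Array(Math.round(seconds * sampleRate));
      let idx = 0;
      for (let i = 0; i < out.length; i++) {
        out[i] = delay[idx];
        const next = (idx + 1) % n;
        delay[idx] = 0.5 * (delay[idx] + delay[next]); // averaging damps highs => decay
        idx = next;
      }
      return out; // e.g. copy into a Web Audio AudioBuffer to hear it
    }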
Then there are masters of the genre that create jaw-dropping effects and instruments by making massive use of delay lines - I'm looking at xoxos:
http://www.xoxos.net/vst/vst.html#nature
Every time I delve into audio DSP I'm a little amazed at how simple a lot of the techniques are in an algorithmic sense. The FFT is a notorious outlier, but most effects are elaborate compositions of beautifully simple pieces of code, even where the theory behind them warrants pages of mathematical background.
It's starkly different from "core CS" because almost everything operates on arrays and pure functions. Yet at the same time it presents an immense design space for both software engineering and UX - most of the "secret sauce" in an audio DSP product comes from exposing the right parameters in the right combinations. A low-level tool like Max/MSP or PD leads to a different workflow from a "pick the presets" VST plugin, in the same way that programming languages are targeted at certain problem domains.
I find it very interesting too, like you said, that every effect or technique boils down to delay lines. It's similar to how every sound boils down to a combination of sine waves - and if you think about it, delays and anything periodic, like musical sequences, can be expressed as sums of sine waves. So, with these thoughts, the god particle of music and audio is the sine wave.
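To make that concrete: you can watch a square wave emerge from nothing but sines by summing its Fourier series. A small illustrative sketch (the function name is mine):

    // Approximate a square wave from odd harmonics at 1/k amplitude (Fourier series).
    function squareish(t, freq, harmonics = 9) {
      let s = 0;
      for (let k = 1; k <= harmonics; k += 2) {
        s += Math.sin(2 * Math.PI * k * freq * t) / k;
      }
      return (4 / Math.PI) * s; // scaling so the full series converges to ±1
    }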
When Alex demoed their first hardware, he'd hold an ice cube against the custom VLSI chip to get it to work at a high enough frequency. Early overclocker!
"The authors have designed and tested a custom n-channel metal-oxide semiconductor (nMOS) chip (the Digitar chip), which computes 16 independent notes, each with a sampling rate of 20 KHz." Kevin Karplus and Alex Strong, "Digital Synthesis of Plucked-String and Drum Timbres", Computer Music Journal, Vol. 7, No. 2 (Summer, 1983), pp. 43-55. That was way more voices than they could get from microprocessors of the day.
In the early 1980s, the big excitement in Stanford EE and CS was that you could take the new VLSI course and they'd actually fab your chip for you if you made it to the end of the course. (Actually, each student project just got a small section of the whole mask; enough chips were manufactured so that everyone could get a few, and they'd just wire the pins to different pads for each particular project's ICs.) I remember Alex showing off his first working Digitar chip to my officemate at the time, Danny Sleator (who is also credited in the paper), using the ice cube technique.
> Karplus-Strong guitar synthesizer implemented in JavaScript using asm.js and Web Audio.
Are the asm.js annotations required for this to work in Firefox? I thought the primary use case for asm.js was converting low-level code into something that could run efficiently in JS, and not so much for handwritten JS. And the demo seems to work fine in Chrome, which has no optimizations specific to asm.js that I know of.
The initial version without any optimisation at all didn't run in real-time, so I started down the asm.js route for fun. With the refactoring done to accommodate asm.js, it does actually work just fine in Firefox even if asm.js compilation fails.
Although asm.js is mainly aimed at compilers, I've seen a number of developers write it by hand to squeeze extra deterministic perf out of JS. Emulator devs have been doing this recently, IIRC. It's really fascinating; I genuinely want to take a look at it myself.
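For anyone who hasn't seen it, the "annotations" are just ordinary JS operators used as type declarations. A minimal handwritten module looks something like this (my own toy example, not from the linked demo):

    function FastMath(stdlib) {
      "use asm";                    // opt in: the engine validates and AOT-compiles this
      function avg(a, b) {
        a = a | 0;                  // "|0" declares the parameters as 32-bit ints
        b = b | 0;
        return ((a + b) >> 1) | 0;  // and declares an int return type
      }
      return { avg: avg };
    }
    // Still plain JS: in engines without asm.js support it runs unchanged,
    // e.g. FastMath(globalThis).avg(3, 5) === 4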
Does anyone know if there is an area of study where sounds are physically modeled using a 3D model of the 'instrument'?
Something like:
* model a room
* model an 'instrument'
* place a virtual microphone
* hit/trigger the instrument model
* render sound waves produced by the model and sample the incoming waves at the microphone
It's doable, but it's incredibly inefficient. It makes more sense to use waveguide models or procedural simulations, because they produce similar results for a tiny fraction of the cycles.
There's a lot of non-linearity in the physics, and any rendered model has to allow for that. So it's a long way short of real-time synthesis, even with a GPU.
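To see why it's inefficient, compare a direct simulation even in just 1D: a finite-difference scheme for the wave equation updates every mesh point per output sample, where KS does a single update. A toy sketch with made-up constants:

    // Leapfrog finite-difference scheme for a 1D string with fixed ends;
    // every audio sample costs a pass over all N mesh points.
    const N = 200;
    let prev = new Float32Array(N);
    let cur = new Float32Array(N);
    let next = new Float32Array(N);
    cur[N >> 1] = 1;               // the "pluck": displace the middle point
    prev.set(cur);                 // start from rest (zero initial velocity)
    const c2 = 0.25;               // normalized wave speed squared (stable if <= 1)
    const samples = new Float32Array(44100);
    for (let t = 0; t < samples.length; t++) {
      for (let x = 1; x < N - 1; x++) {
        next[x] = 2 * cur[x] - prev[x] + c2 * (cur[x + 1] - 2 * cur[x] + cur[x - 1]);
      }
      samples[t] = cur[N >> 2];    // virtual "pickup" a quarter along the string
      [prev, cur, next] = [cur, next, prev];
    }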
Karplus-Strong is the entry-level version of various complex waveguide and resonator models, and the technology has been used in commercial products for more than twenty years now.
The commercial problem is that for many applications, multi-sampling sounds better. Most musicians don't want experimental sounds, and those who do prefer other ways to make unusual noises - preferably more intuitive ways that are easier to control.
So there's limited commercial demand for waveguide modelling, and no demand at all for a full-fat 3D rendered synthesis engine.
The Wikipedia page about Karplus–Strong string synthesis says that there is a refinement of the algorithm that could also be used to model acoustic waves in tubes and on drum membranes.
I wish a JavaScript MIDI player could use those high quality audio synthesis techniques to play MIDI files.
There are lots of VSTs that do about the same thing. In the xoxos VST link posted above, check out Pling. If you want the exact same algorithm in a VST plugin, you could port it to Lua in my scriptable plugin. In fact someone made a nice Karplus-Strong script for it a couple of days ago (I should be merging it instead of making HN comments):
Neat! I did this a couple of years ago. My version is here (choose "Acoustic Guitar"). Your version is a little more automated in note production, but definitely more robust and a bit catchier. ;)
It's strange that it's so far away from a real acoustic or electric guitar sound. It seems like a piano would be a lot easier to simulate with a few damped sine waves?
I think the main advantage of Karplus-Strong is that it is (or at least was) cheaper and simpler to implement than, for example, an algorithm generating a similar sound additively from sine waves. AFAIR the original paper describes an implementation using an impulse, a simple circular delay buffer and an averaging (addition + single right shift) filter.
Simple additive synthesis usually falls short for something like a piano, too, since there are a lot of factors that influence the frequency content of a single struck note. This was posted a while ago, describing Pianoteq's approach to piano modeling: https://www.pianoteq.com/tutorials?play=modelling
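For what it's worth, the "few damped sine waves" idea from the parent is modal synthesis in its simplest form. A toy sketch - the mode frequencies, amplitudes and decay rates here are invented for illustration, not a real piano model:

    // Modal synthesis at its simplest: a struck tone as a sum of decaying sines.
    function modalHit(seconds, sampleRate = 44100) {
      const modes = [
        { freq: 220, amp: 1.0, decay: 3.0 },  // fundamental
        { freq: 441, amp: 0.5, decay: 4.5 },  // slightly sharp 2nd partial (inharmonicity)
        { freq: 664, amp: 0.3, decay: 6.0 },
      ];
      const out = new Float32Array(Math.round(seconds * sampleRate));
      for (let i = 0; i < out.length; i++) {
        const t = i / sampleRate;
        for (const m of modes) {
          out[i] += m.amp * Math.exp(-m.decay * t) * Math.sin(2 * Math.PI * m.freq * t);
        }
      }
      return out;
    }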
Yeah that's what I meant. The piano there sounds rather good. Yet guitar simulations sound absolutely horrible.
Or maybe it's because I don't actually play the piano...
It is really cheap to implement -- you can use a shift-xor random bit generator to get the blend factor of ½ that they mention. The whole thing was about ten 8086 instructions, as far as I recall (it's been twenty years since I played with it).
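And the drum variant they describe is just the same loop with a coin-flip on the sign of the averaged sample. A rough sketch of one update step - I've swapped in a 32-bit xorshift for the paper's shift-xor register, and the buffer type is my own choice:

    // One step of the KS drum variant: average two adjacent delay-line samples
    // (add + single right shift), then negate half the time (blend factor 1/2).
    let rng = 0x1234abcd | 0;        // any nonzero seed
    function randBit() {
      rng ^= rng << 13;              // standard 32-bit xorshift (13, 17, 5)
      rng ^= rng >>> 17;
      rng ^= rng << 5;
      return rng & 1;
    }
    function drumStep(delay, idx) {  // delay: e.g. an Int16Array circular buffer
      const n = delay.length;
      const next = (idx + 1) % n;
      const avg = (delay[idx] + delay[next]) >> 1;  // addition + single right shift
      delay[idx] = randBit() ? avg : -avg;          // random sign => drum timbre
      return next;                                  // caller reads delay[idx] before each step
    }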
Here's a paper with more extensions to the Karplus-Strong algorithm (+ analyses thereof):
(I haven't tried it out so I don't know how good it sounds.)
There's also a master's (or PhD?) thesis from DIKU (Datalogisk Institut ved Københavns Universitet - the computer science department at the University of Copenhagen) from 15-20 years back where a guy "rewired" sounds played on one instrument to sound like they were played on other instruments. I saw his presentation/demonstration back then, and it sounded pretty okay. It's in English and should be available somewhere on DIKU's website. Darned if I can recall the title or the author, though.
There are also three good basic textbooks on synthesis/sound design, by Rick Snoman (Dance Music Manual), Martin Russ and Brian Shepard, that have specific recipes ("detune a saw and a square by 3 cents...").
___________
And if you don't have a synth, ask around for somebody who has Logic Pro X (several good synth models included) or Ableton with VSTs or AUs installed (you have to pay e.g. $100 for Analog unless you have Ableton Suite).
A good book that provides a quite detailed overview of some physical modeling algorithms (and tons of other synthesis algorithms) is The Computer Music Tutorial by C. Roads. With regard to Karplus-Strong, for example, it provided a detailed enough explanation of the algorithm for me to base an implementation solely on it. It also describes a flute instrument model built from noise, filters and delay lines.
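As a taste of the flute idea, here's a rough sketch of that structure - continuous breath noise fed into a feedback delay line with a one-pole low-pass in the loop. The constants are my own guesses, not the book's:

    // Flute-ish model: noise excitation + low-passed feedback delay line.
    function fluteish(freq, seconds, sampleRate = 44100) {
      const n = Math.round(sampleRate / freq);  // loop length sets the pitch
      const delay = new Float32Array(n);
      const out = new Float32Array(Math.round(seconds * sampleRate));
      let idx = 0;
      let lp = 0;
      for (let i = 0; i < out.length; i++) {
        const breath = (Math.random() * 2 - 1) * 0.05;  // constant noise excitation
        lp = 0.7 * lp + 0.3 * delay[idx];               // one-pole low-pass in the loop
        delay[idx] = breath + 0.95 * lp;                // loop gain < 1 keeps it stable
        out[i] = delay[idx];
        idx = (idx + 1) % n;
      }
      return out;
    }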