
Four common mistakes in audio development - bpierre
http://atastypixel.com/blog/four-common-mistakes-in-audio-development/
======
dmytroi
I really love time-restricted environments; in my opinion they truly liberate
programmers: instead of reaching for their language/library/tech/pattern-of-
choice, they suddenly realize "oh, we don't have time for that". Do we need
GC? We don't have time for that. Do we need to allocate memory? We don't have
time for that. Can we maybe do this enterprise-style OOP hierarchy spread
across tens of thousands of source files? We don't have time for that.

Realtime forces people to write less, writing only what actually matters, and
all this in my opinion helps people become better engineers (in a way). I just
wish it looked more attractive to people - hacking your way through
time/resource-constrained systems (like MCUs, for example) can be as fun as
hacking HTML/JS to make your site behave as you want.

~~~
zeta0134
I've been spending a great deal of time working on a game engine for a
Nintendo DS unit, which is a very constrained little system. Source here if
anyone's interested.

[https://github.com/zeta0134/pikmin-nds](https://github.com/zeta0134/pikmin-nds)

The NDS is an odd little system. Despite supporting 3D graphics, it entirely
lacks useful things like a floating point unit or a hardware divider. All of
your math ends up being done in 12.20 fixed point. Square roots are already
slow on normal systems, and they're just ridiculously slow on this thing;
plus you've only got something like 550k hardware cycles per frame to get
everything done.
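For anyone unfamiliar with the idiom, fixed-point math on a system like this amounts to roughly the following sketch (the 12.20 split follows the comment above; the helper names and exact shift are illustrative, not the engine's actual code):

```cpp
#include <cassert>
#include <cstdint>

// 12.20 fixed point: 12 integer bits, 20 fractional bits.
// The exact split is a per-project choice.
constexpr int FX_SHIFT = 20;
constexpr int32_t FX_ONE = 1 << FX_SHIFT;

constexpr int32_t fx_from_int(int32_t x) { return x << FX_SHIFT; }

// Multiply: widen to 64 bits so the intermediate product doesn't
// overflow, then shift back down into 12.20 range.
constexpr int32_t fx_mul(int32_t a, int32_t b) {
    return static_cast<int32_t>((static_cast<int64_t>(a) * b) >> FX_SHIFT);
}

// Divide: pre-shift the numerator up. On hardware without a divider,
// this goes through a software routine, so it's best kept out of hot loops.
constexpr int32_t fx_div(int32_t a, int32_t b) {
    return static_cast<int32_t>((static_cast<int64_t>(a) << FX_SHIFT) / b);
}
```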

Getting over 100 objects to draw, have basic physics applied to them, and
perform AI in a reasonable amount of time has led to some eye opening
revelations. Right at the start of the project I spent ages trying to come up
with the fastest way to sort all of my objects back to front, to make the
renderer work. I was just sure this was going to be the slowest part of the
draw, so I spent probably two weeks on sorting implementations alone before
finally giving up and choosing a std::priority_queue just to have something
that worked. Then I actually tested the thing, and came to the (obvious?)
conclusion that I had wasted a great deal of time. The std::priority_queue was
handling 100 or so objects really really fast, and the system was lagging
badly because the drawing itself was poorly optimized and needed to be
revisited. I had to start actually profiling my code, and the recurring theme
was that the bottlenecks were almost never where I thought they would be, and
usually surprised me.
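For reference, the std::priority_queue approach amounts to something like this sketch (the DrawCall struct and depth field are hypothetical stand-ins for the engine's real object data):

```cpp
#include <cassert>
#include <queue>
#include <vector>

// Hypothetical draw record: 'depth' is distance from the camera;
// larger means farther away.
struct DrawCall {
    int depth;
    int id;
};

// Comparator so the farthest object pops first (back-to-front order).
struct FartherFirst {
    bool operator()(const DrawCall& a, const DrawCall& b) const {
        return a.depth < b.depth;  // max-heap on depth
    }
};

std::vector<int> back_to_front(const std::vector<DrawCall>& objects) {
    std::priority_queue<DrawCall, std::vector<DrawCall>, FartherFirst> q(
        objects.begin(), objects.end());
    std::vector<int> order;
    while (!q.empty()) {
        order.push_back(q.top().id);
        q.pop();
    }
    return order;
}
```

For ~100 objects this is plenty fast, which is exactly the point of the anecdote: profile before optimizing.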

It's a whole ton of fun, I'll say that much. Not sure if I'll end up doing
anything useful with the engine once it's done (obviously I cannot do anything
meaningful with a Pikmin clone due to IP reasons) but I'm enjoying just being
able to push the limits of the system.

~~~
Splines
Just curious - how did you profile code running on the NDS? Emulator?

~~~
patresi
I believe this is what OP is referring to
[https://imgur.com/a/YMi6K#97](https://imgur.com/a/YMi6K#97)

~~~
zeta0134
Ah yes, that was an earlier and much more crude hack. That's remarkably
simple: just change the background color of the bottom screen at any time.
Since the screen is being drawn at the same time your code is running, the
results literally show up in realtime.

A huge downside with this technique is that color changes during H-blank
(waiting period between scanlines) and more critically V-blank (rather long
waiting period after each full screen draw) cannot be seen. When I was using
this technique, I had the engine actually wait until all of V-blank had passed
before it started doing the timing colors. This still ended up being pretty
hard to read of course, as the colors would flicker rapidly at 60 FPS,
couldn't be easily saved to view later, and were complicated significantly by
the multi-pass nature of the engine, where each visible frame is actually
composed of several hardware frames, all with totally different timings. So I
scrapped that method almost immediately and wrote text-based profiling code
that used the built in hardware timers.
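The timer-based approach can be sketched portably like this (std::chrono stands in here for the DS's hardware timers, which the real code would read instead; the section names are illustrative):

```cpp
#include <cassert>
#include <chrono>
#include <string>
#include <unordered_map>

// Accumulated time per named section; a real engine would print these
// as on-screen text once per frame, then reset.
std::unordered_map<std::string, long long> g_section_us;

// RAII timer: measures the scope it lives in and adds the elapsed
// microseconds to the named bucket on destruction.
struct ScopedTimer {
    const char* name;
    std::chrono::steady_clock::time_point start;
    explicit ScopedTimer(const char* n)
        : name(n), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
                      std::chrono::steady_clock::now() - start).count();
        g_section_us[name] += us;
    }
};

void update_physics() {
    ScopedTimer t("physics");
    // ... work being measured ...
}
```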

------
splatcollision
Just reading the intro so far, not an audio developer particularly, but wanted
to quote this:

> although there is a high horse present in this article, consider me standing
> beside it pointing at it, rather than sitting on top of it.

Something we can all aspire to!

~~~
kranner
Seneca in
[https://en.wikisource.org/wiki/Moral_letters_to_Lucilius/Let...](https://en.wikisource.org/wiki/Moral_letters_to_Lucilius/Letter_27)

"What," say you, "are you giving me advice? Indeed, have you already advised
yourself, already corrected your own faults? Is this the reason why you have
leisure to reform other men?" No, I am not so shameless as to undertake to
cure my fellow-men when I am ill myself. I am, however, discussing with you
troubles which concern us both, and sharing the remedy with you, just as if we
were lying ill in the same hospital. Listen to me, therefore, as you would if
I were talking to myself. I am admitting you to my inmost thoughts, and am
having it out with myself, merely making use of you as my pretext.

------
iammyIP
These mistakes basically boil down to not using slow scripting languages and
making the core buffer loop run fast and undisturbed. For realtime synthesis
use c or c++, optimise early, use trigonometry approximations in range -pi /
pi, do not use lookup tables, they unnecessarily fill the cache, cpus are fast
enough, write the core loop branchless, write for cache, use vectorisation and
function pointers. Do not use anything fancy, simple int32_t, float and
vector/array is enough (some filters need double precision though). Do not
copy stuff, point to it. Precalculate what can be precalculated, e.g. don't do
x / samplerate, do x * samplerate_inverse. Check for denormals.
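Two of those points in miniature, the reciprocal trick and a denormal flush (the threshold constant and struct names here are illustrative, not from any particular codebase):

```cpp
#include <cassert>
#include <cmath>

// Precompute the reciprocal once, so the per-sample loop multiplies
// instead of divides.
struct PhaseInc {
    float samplerate_inv;
    explicit PhaseInc(float samplerate) : samplerate_inv(1.0f / samplerate) {}
    float operator()(float freq_hz) const { return freq_hz * samplerate_inv; }
};

// Flush tiny values to zero so feedback paths (filters, reverbs) don't
// decay into the denormal range, which is very slow on some CPUs.
inline float flush_denormal(float x) {
    return std::fabs(x) < 1e-15f ? 0.0f : x;
}
```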

~~~
davedx
That's not my takeaway from the article. It's not saying don't use "slow
scripting languages", it's saying avoid language features that are not safe
for real-time programming and give the audio thread enough time to do its
thing.

In my experience, writing everything in C/C++ has its own host of problems.
What's worse, a GC stutter or a segfault? :)

~~~
wrl
Writing everything in C/C++ has its own host of problems, but it's roughly the
name of the game in anything realtime. It's easy enough to think about writing
one effect or synth in a slower language, but keep in mind that modern digital
audio workstation software is expected to run up to hundreds of plug-ins to
produce one block of audio, and that's just not possible without being
directly on top of the metal. Even vanilla C/C++ often isn't enough;
generally, there's a necessity to drop down into assembler or at the very
least use SSE intrinsics.

~~~
neoeldex
Realtime often means: when there's not enough time, don't bother. It doesn't
mean going as fast as possible. Those are two separate issues, in my opinion
at least.

~~~
wrl
How are they two separate issues? If you don't go as fast as possible, you can
get less work done before your deadline. Even with today's CPUs and the kind
of parallelisation that a good DAW program will do, I still routinely see
producers (myself included) hit CPU usage limits. In audio DSP, programmers
have responded to the increase in CPU capacity not with higher-level languages
but instead with more computationally expensive algorithms (modern filters,
for example, use techniques derived from circuit simulators for solving
nonlinear equations).

Besides, actually writing the DSP code makes up a comparatively tiny portion
of the development time. Tuning and tweaking the DSP algorithms takes far
more, and designing the user interface dwarfs both. I know multiple audio
products which use Lua for their UI layer because the productivity increase is
so significant. And there, you don't have to worry about performance to
anywhere near the degree you need to in the DSP.

------
exDM69
> It can be helpful to know that, on all modern processors, you can safely
> assign a value to a int, double, float, bool, BOOL or pointer variable on
> one thread, and read it on a different thread without worrying about tearing

This is true on the CPU side (for some CPUs) but what about compiler
optimizations?

Using "volatile" should make stores and loads happen where they are written in
the code, but can it be relied on in multi-threaded use cases? It's generally
frowned upon for a good reason, but perhaps it's acceptable if you're
targeting a limited range of CPUs (the article seems to be focused on iOS and
ARM only).

A safer bet would be to use __atomic_load and __atomic_store (from [0] or [1])
or C11 atomics if you have the appropriate headers for your platform. They
provide safe loads and stores for all CPU architectures, and provide the
appropriate memory barriers for cache coherency (for architectures that do
care).

[0] [https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html](https://gcc.gnu.org/onlinedocs/gcc/_005f_005fatomic-Builtins.html#g_t_005f_005fatomic-Builtins)

[1] [http://clang.llvm.org/docs/LanguageExtensions.html#langext-c11-atomic](http://clang.llvm.org/docs/LanguageExtensions.html#langext-c11-atomic)
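In C++11 terms the same idea looks roughly like this (std::atomic rather than the compiler builtins, but the guarantees are equivalent; the shared-gain example is made up for illustration):

```cpp
#include <atomic>
#include <cassert>

// A parameter shared between the UI thread and the audio thread.
// Lock-free for a 32-bit value on mainstream platforms.
std::atomic<float> g_gain{1.0f};

// UI thread: publish a new value. The release ordering makes writes
// performed before this store visible to a thread that acquires it.
void set_gain(float g) { g_gain.store(g, std::memory_order_release); }

// Audio thread: read the latest published value without blocking.
float get_gain() { return g_gain.load(std::memory_order_acquire); }
```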

~~~
Unklejoe
> [Using "volatile" should make stores and loads happen where they are written
> in the code...]

Really? I was under the impression that "volatile" basically did nothing other
than prevent the compiler from optimizing out a variable that appears to not
ever be written to, but is actually written to via an ISR or something.

Also, how does the compiler usually handle global variables? Since they can be
modified by some code that was linked in, does it assume them to be volatile?

Of course, this issue is slightly different from the atomicity of certain
data types. We're talking more about situations where you have a series of
atomic operations which you expect to happen in the order in which they're
written.

~~~
exDM69
> Really? I was under the impression that "volatile" basically did nothing
> other than prevent the compiler from optimizing out a variable that appears
> to not ever be written to, but is actually written to via an ISR or
> something.

I think it also stops loads and stores from being reordered with other
volatile loads and stores, and yes, it is indeed typically useful for
memory-mapped I/O, interrupt handlers and such. But most of the time it's
not the right thing to do.

> Also, how does the compiler usually handle global variables? Since they can
> be modified by some code that was linked in, does it assume them to be
> volatile?

No, they're not considered volatile but may have stricter guarantees with
reordering than local variables (esp. when combined with calls to foreign
functions).

If multithreaded code needs to be correct and portable, using locks or atomics
is the way to go.

~~~
vardump
> I think it also stops loads and stores from being reordered with other
> volatile loads and stores, and yes, it is indeed typically useful for
> memory-mapped I/O, interrupt handlers and such. But most of the time it's
> not the right thing to do.

It only prevents reordering by the compiler. The CPU can still reorder them as
much as it pleases; volatile doesn't affect that at all.

Of course you're pretty safe on x86. Not so on other platforms. Typically you
find this kind of issue when porting x86 code to some other platform. x86
writes are always in order with respect to other writes. Loads are always in
order with respect to other loads. Just remember that loads can sometimes be
ordered before stores.

Again, volatile does not prevent the CPU from reordering stores and loads.

~~~
Dylan16807
If the compiler isn't preventing CPU reordering, then it has been _impossible_
pre-2011 to use volatile for its original purpose, communicating with memory-
mapped device registers. That sounds like a broken compiler to me.

~~~
vardump
Yeah, it'd not be very nice if the DMA start bit in some DMA control register
got poked while the DMA address was still unset!

Those architectures that do this typically provide specialized, always-
serialized instructions for communicating with memory-mapped devices. If not,
you need proper fences to ensure the right order.

x86 stores are always in order (except for some specific instructions, but
they're of no concern in this context), so it doesn't need this.

------
aidos
_> It can take just one glitch during a live performance for a musician to
completely lose faith in their whole setup_

I was an early adopter of digital vinyl (if you haven't seen it before, the
records have a screechy signal that software can use to determine the position
of the needle on the record, which it then maps to an audio file).

A friend of mine purchased the most popular unit at the time but it was really
unreliable. It crashed once while a club full of people were dancing and that
was the end of it for me. I switched to a unit by a small company (Serato)
that had just been released (2004) and never looked back. The unit itself
still works perfectly; there was a bug back in about 2008 that they tracked
down and patched for me.

Apparently the original brand has now caught up technology-wise and they're a
big player, but I will never, ever, ever buy their kit. Reliability issues
with audio gear can completely destroy your trust.

~~~
Unklejoe
I saw a DJ using one of those digital encoded records a few weeks ago. Pretty
cool concept, although I'm not sure what the actual difference is between that
and using something like Serato which has the smaller wheels you can turn (if
I remember correctly), other than being more prone to physical damage.

I guess one advantage is that it looks like they're playing the actual
records. It had me fooled for a minute.

~~~
aidos
Haha, to be fair - they _are_ playing actual records :)

At the time neither the tech nor acceptance were there for the modern tools.
While increasingly there were CDJs around, they didn't feel the same for
people who did any scratching.

Also, turntables were everywhere you went, so you could take everything you
needed in your bag (laptop, 2 x records, little conversion box) and you could
just plug into an existing setup, anywhere, without issue.

Digital vinyl bridged the gap as people started moving over. Also, at the time
the software only worked with records and cds. Now you have a huge selection
of controllers to choose from.

I'm actually about to sell all my stuff to buy a "controller" (something like
[https://www.google.co.uk/search?q=reloop+terminal+mix+8&tbm=...](https://www.google.co.uk/search?q=reloop+terminal+mix+8&tbm=isch))
so that I have something convenient and always ready to go.

------
radarsat1
Or just use FAUST: [http://faudiostream.sf.net/](http://faudiostream.sf.net/)

Seriously, check it out, it's awesome. Write your audio DSP in a language
suited to it, compile to efficient C++ (among other languages) and optionally
embed it in a (huge) variety of environments. (even Javascript for webaudio)

It's a functional description language that describes a DSP graph, and can
generate block diagrams too. Not to mention, it has libraries containing any
filter you can imagine. Highly recommended.

~~~
mhroth
You can do something similar (translate to fast C) with Pure Data (Pd) [0], a
graphical audio programming language, by using Heavy [1].

[0] [http://puredata.info/](http://puredata.info/)

[1] [https://enzienaudio.com/](https://enzienaudio.com/)

~~~
radarsat1
Cool, I hadn't heard of Heavy. It had me until "use our cloud-based service to
instantly generate high-performance code for x86 and ARM platforms." The day I
switch to using a hosted service to compile my code...

Still, nice idea.

------
emiliobumachar
So, I assume none of those popular OSes has priority inheritance[1].

Even though it's a concept from realtime computing, I thought it would be
widespread in general-purpose OSes as well. Really, it seems like a useful
feature for any OS that implements priority at all.

What would be the downsides of having it in a general-purpose OS?

The ones I can think of are development cost and processing overhead.

[1]
[https://en.wikipedia.org/wiki/Priority_inheritance](https://en.wikipedia.org/wiki/Priority_inheritance)

~~~
fl0wenol
It's available, but the concept has issues when implemented simplistically on
a multi-processor system. (I'm not familiar with the arguments as to why;
sorry). You also have to request this behavior explicitly, and I think under
certain circumstances it can add latency or additional context switching
overhead that might otherwise be avoided between your application threads.
Also, in some environments your application has to have elevated privileges to
boost thread priority, which carries risk.

------
benwad
Since we seem to have a lot of audio programmers in here, does anyone have an
opinion on using non-C/C++ languages for audio development? I've always used
C/C++ but newer systems languages like Go and Rust (both have e.g. PortAudio
support) seem quite well suited to the task.

~~~
jerf
You'd have to write in a bit of a dialect of Go to make sure you don't get bad
GC behavior. It's possible in Go, unlike many scripting languages, to write in
an allocate-up-front style that doesn't generate garbage, in pretty much
exactly the same way you'd do it in C. If you write in the C style, you get a
lot of C-like behavior, only with memory-safety. But you don't get much
support from the runtime for this style; for instance, if you accidentally
append to a slice beyond its capacity, the runtime will simply reallocate the
array, there's no way to ask it to not do that and throw an error instead or
something, beyond not using "append" (hence my comment about the "dialect").
('Course, there isn't really one in C either; arguably reallocation is better
than a segfault here, as one merely _risks_ an audio pop at some point whereas
another guarantees it.) There is also no way to prioritize a goroutine.

I think people end up _overselling_ the difficulty here, because they often
speak as if the 10ms maximum GC delay were actually the minimum, and as if
the GC ran uncontrollably, keyed by a random number generator, rather than
by the amount of garbage generated.
It's not that hard to imagine an audio application that runs a 50us GC every
several minutes or even less often if you preallocate everything. In practice
it's not entirely dissimilar to writing in C and avoiding malloc.

But the real level of difficulty is still something that should make you
think, especially as you move from "a single filter" to "a full synthesizer
suite".

Rust would generally be a better language, as long as you can step up to the
somewhat-harder language, and can find the libraries you need. The payoff for
the somewhat-harder language is that you'll have much better assurances about
many of the relevant issues. It's what I would generally choose for this task,
again, given libraries. But if someone did want to use Go because it's a bit
simpler of a language, I wouldn't immediately breathe flames on them, unless
they were being obviously too ambitious.

~~~
pconner
> It's possible in Go, unlike many scripting languages, to write in an
> allocate-up-front style that doesn't generate garbage, in pretty much
> exactly the same way you'd do it in C

I am very interested in this. Do you have any links with more info?

~~~
jerf
Basically, the link would be the language spec. Go's structs are "value
types". Until you deliberately take a pointer or access them through an
interface, they're just structs, like C. It also has real arrays.

In many ways, Go is like a scripting language, but it does let you do some
systems-like things in a way that Python or Perl or the other similar
languages completely don't.

------
bartl
I'm a bit amazed he doesn't even mention [double
buffering](https://en.wikipedia.org/wiki/Multiple_buffering), a technique
already used in old video games to avoid flicker, as a way to draw scenes
off-screen and only pass them on to the video stage once the scene is
complete.

All you would need here is 2 (or more) copies of the shared data structure and
a few pointers to the data structure. You fill in a draft version of the data
structure and only change the "read" pointer to point to it when it's ready.
Changing that pointer is, I would hope, atomic. You can then change the
"write" pointer to point to a different copy of the data structure to work on.

To make sure the client only reads consistent data, it can make a copy of that
pointer before it starts processing, because the pointer itself might change.

With 3 buffers instead of 2, and provided the processing of a buffer takes
less time than the switching cycle, you can be sure your buffer data will
never change out from under you.
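The pointer-swap scheme described above can be sketched like this (simplified single-writer/single-reader version; the State struct and names are made up for illustration, and the no-overwrite guarantee rests on the timing assumption stated above):

```cpp
#include <atomic>
#include <cassert>

struct State { int keys_down[16]; };

State buffers[3];
std::atomic<State*> g_read{&buffers[0]};  // latest complete snapshot
int g_write_idx = 1;                      // next buffer the writer fills

// Writer (e.g. UI thread): fill a draft buffer, then publish it by
// swinging the read pointer. The pointer store itself is atomic.
void publish(const State& s) {
    buffers[g_write_idx] = s;
    g_read.store(&buffers[g_write_idx], std::memory_order_release);
    g_write_idx = (g_write_idx + 1) % 3;  // rotate to the next buffer
}

// Reader (e.g. audio thread): copy the pointer once, then read through
// it. The writer never touches the currently published buffer, and with
// three buffers a slightly stale pointer is also safe under the timing
// assumption above.
const State* snapshot() {
    return g_read.load(std::memory_order_acquire);
}
```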

~~~
JoachimSchipper
Double buffering isn't very suitable for audio. You're right that a suitably-
large buffer would allow non-realtime code to "render" e.g. a tenth of a
second of audio and pass it off to the realtime audio thread (and/or the audio
subsystem, if that has sufficiently-large buffers on your platform).
Unfortunately, such large buffers also introduce significant latency; the
added latency is fine if you're a music player, but is pretty nasty for
interactive applications.

~~~
squeaky-clean
The audio doesn't need to be double buffered. I think GP means you can avoid
the author's examples with locks by double buffering the array of keyboard
keys being pressed. You don't need to lock either the UI or audio thread, and
can ensure they're both seeing uncorrupted data.

------
cageface
I spent the better part of the last few years working on my own audio apps.
It's definitely the most difficult programming domain I've worked in. Real
time requirements + threading + low level code makes for a very challenging
environment. But it's also a lot of fun. Using the tools the author describes
here can save you a lot of headaches and let you focus more on the fun part
though.

------
ukyrgf
I wasn't even aware I WAS living in a "post-Audiobus/IAA world". So these
are... iPhone apps?

~~~
pierrec
Yeah, the article reads like it's going to be somewhat platform-agnostic until
you're down a few paragraphs and it blasts you with iOS stuff as if it was the
only platform that mattered for realtime audio. I'm more of a mind that
realtime creative audio on iOS is likely to remain pretty niche.

------
bwindels
Great article. Interesting that modifying and reading word-sized values is an
atomic operation on ARM. IIRC on x86 this is not the case, because values can
be cached in the caches of the different CPU cores and thus be out of sync.
Does anyone have more detailed insight into this?

~~~
AstralStorm
ARMv7 works on deferred consistency between cores and memory. So, even if the
write itself is "atomic", which should rather be called tear-free, there is no
inherent synchronisation of writes and reads to the same memory location.

C11 and C++11 atomics also guarantee ordering depending on the memory model
parameter.

Look up Sutter's talk titled "Atomic Weapons" for more detail.

~~~
bwindels
Interesting talk! Here's the link:
[https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutter-atomic-Weapons-1-of-2](https://channel9.msdn.com/Shows/Going+Deep/Cpp-and-Beyond-2012-Herb-Sutter-atomic-Weapons-1-of-2)

------
sehugg
It kills me that it's 2016 and PulseAudio stutters when I move a window (at
least on my system).

------
delinka
"Don’t use Objective-C/Swift on the audio thread."

Nothing wrong with using Swift (the language) to render audio in realtime on
the audio thread. I'd change the advice above to: don't send messages to
Objective-C objects during your audio thread.

~~~
callesgg
Swift will need to do garbage collection at one time or another, and then you
are fu*ed.

~~~
plorkyeran
Swift is reference counted and does not pause at unpredictable times.

~~~
vardump
Reference counting is a form of garbage collection. You can have avalanches of
objects being deallocated as a consequence of one object's refcount reaching
0.

You don't want any allocation or deallocation when you need realtime
performance.

~~~
plorkyeran
An object's refcount being decremented is not an unpredictable time for a
pause. If you never allocate or release objects on your audio worker thread,
it will never be paused for GC, as the person I was replying to implied could
happen.

~~~
vardump
Sure. But same can be said for pretty much any garbage collected system.

If you don't create objects or cause objects to become unreachable, you also
won't be creating garbage. So you won't have GC pauses. Unless the GC system
is braindead and runs even when there's no heap pressure.

~~~
plorkyeran
With a stop-the-world garbage collector, allocations on other threads can
result in your audio thread being paused, so you have to not be allocating
objects on _any_ thread, which is a significantly more difficult requirement
than simply not allocating on your audio thread while audio playback is
occurring.

------
fenomas
> Any app that does audio has at least two threads of execution: the main
> thread, and the audio thread

As a side note, I sure wish browsers would hurry up and implement web audio
workers so that this could be true for me!

~~~
eterm
Why don't we have sandboxed execution environments in the OS itself?

Why do we have a web browser at all, and not just a way of running arbitrary
code from a network in a sandbox? If we continue to need the browser to act
as an application runner, we'll end up re-imagining the browser as an OS,
with workers for anything an OS could do anyway.

~~~
vertex-four
> Why don't we have sandboxed execution environments in the OS itself?

Well, if you try, you wind up with something that looks an awful lot like a
web browser - especially once WebAssembly is a thing. The API that many
desktop operating systems provide is not designed for the security model that
you're looking for, so you wind up building a new one on top of it - see WinRT
when Microsoft needed a sandbox.

We used to have things like Java Web Start (still do in some enterprise
systems), and, well, it's not exactly any better than a web browser except
that it has a better view layer for applications than HTML/CSS. It's also not
supported on mobile platforms.

------
amelius
> If it doesn’t, the buffer runs dry, and the user hears a nasty glitch or
> crackle: that’s the hard transition from audio, to silence.

It would be awesome if we could prevent this crackle somehow on a lower level
of abstraction. What I mean by that is that if the buffer runs dry, the
hardware (or the OS/audio driver) could do some prediction in order to bridge
any gaps in the audio more nicely.

~~~
Kliment
A lot of audio hardware does this - it switches the audio off only when the
signal is at a zero crossing. So it can be implemented by the OS by telling
the codec that the zero value is an underrun rather than a desired signal.

~~~
amelius
Interesting. So what would happen if a sine wave of, say, 1 kHz is suddenly
shut off by a buffer underrun?

~~~
cyphar
It would be cool if the hardware did some Fourier analysis to resample the
buffers it gets (so it could make them run for longer if the buffers run dry),
but it probably would cause some kind of latency issue. I reckon that just
avoiding buffer underruns is less overhead.

~~~
Bromskloss
I had to cut off an audio file abruptly once, and I found that I could smooth
out the abruptness by quickly fading in a reverberated version of the audio,
just as the original audio was about to end, and then letting it ring for a
fraction of a second after the original had ended.

(I say "fading in", but it might have been that I had the reverb applied but
dry, and transitioned to wet just before the signal ended.)

~~~
dankoss
A colleague of mine actually patented this technique:
[http://www.google.com/patents/US8538038](http://www.google.com/patents/US8538038)

~~~
Bromskloss
:-/

~~~
cyphar
If you have any evidence of prior art predating that patent (I really hope
you do), then you can topple it. If your code to do what you said isn't free
software, I'd recommend you release it as free software now.

~~~
Bromskloss
I can't check right now, but it's possible that the patent predates my use of
the technique. Even if not, I don't know how I could prove it.

By the way, it wasn't done in code; I did it manually in Ardour [0].

[0] [https://ardour.org/](https://ardour.org/)

------
webscalist
Better article: [http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing](http://www.rossbencina.com/code/real-time-audio-programming-101-time-waits-for-nothing)

~~~
mobiuscog
I wouldn't say 'better', they're both useful.

------
JoeAltmaier
Another common mistake: off-by-one errors. Even a single zeroed or duplicated
sample value is clearly audible! It's amazing how sensitive we are to audio
artifacts.

~~~
iammyIP
That's not necessarily an off-by-one error, but a broader problem in audio
synthesis, which you can generally deal with by applying low-pass filtering.
For example, depending on the starting phase of a sine oscillator multiplied
by a simple linear volume envelope (1.0 -> 0.0), you will get a click at the
first sample, which is loudest at multiples of pi/2 for the starting phase
(cos). So you need to low-pass filter the envelope to get a quick fade-in of
the oscillator without a click. How you do this, and how it sounds, is up to
you. The original Hammond organ similarly got some nice analogue clicks as
initial transients because of this 'problem', as a side effect; in digital
it's much harder and more expensive to make nice curves. De-clicking an audio
engine properly is certainly an important and sometimes frustrating task.
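The low-pass-filtered envelope idea can be sketched as a one-pole smoother applied to the raw gain (the coefficient here is an arbitrary illustrative value; real engines tune it to a fade time):

```cpp
#include <cassert>

// One-pole low-pass: each sample moves the smoothed gain a fraction of
// the way toward the target, turning a step change into a short
// exponential fade and removing the click.
struct Smoother {
    float value = 0.0f;
    float coeff;  // 0..1; smaller = slower fade (value below is illustrative)
    explicit Smoother(float c = 0.01f) : coeff(c) {}
    float next(float target) {
        value += coeff * (target - value);
        return value;
    }
};
```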

------
zongitsrinzler
What song is in the 4-second sound clip? It sounds so familiar, like
something from Daft Punk.

------
gradinafrica
The Firefly references in this article are on point.

------
tantalor
This is also true of animation threads.

------
camperman
I remember using a circular buffer for a MOD/S3M player I wrote many moons
ago. I think you called a hardware interrupt to enable the Gravis or
Soundblaster to read from a block of memory and send it out to the speakers at
the right frequency, and then made sure that you fed enough data into the
buffer. It wasn't even concurrent - every loop, just mix enough of the track
to fill the buffer and write it. Simpler days...

~~~
exDM69
> It wasn't even concurrent - every loop, just mix enough of the track to fill
> the buffer and write it.

This probably works but the proper way to do it back then (and the way drivers
work today) was to hook an interrupt that gets fired when the audio buffer
needs to be filled.

So there's an element of concurrency in it since you don't know when the IRQ
is going to fire. Synchronization was easier with single cores, though. Just
disable interrupts and you're done.

~~~
camperman
> This probably works

No probably about it - it worked like a charm.

> but the proper way to do it back then (and the way drivers work today) was
> to hook an interrupt that gets fired when the audio buffer needs to be
> filled.

Which was much more trouble than it was worth when writing a 64k intro on a
deadline.

------
joshkpeterson
How will web assembly affect audio development in the browser?

